Re: NVDA is not reading/working in tables in MS word properly
On Sun, Sep 4, 2022 at 06:01 AM, mr Krit Kumar kedia wrote:
Please tell me how this is possible in the context of my above text.
- User error, plain and simple. And the number of possible things you could have unwittingly done to the first document that you didn't do when creating the second is myriad. That is far and away the most likely explanation. --
Brian - Windows 10, 64-Bit, Version 21H2, Build 19044
It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.
~ Irving Babbitt
Re: NVDA is not reading/working in tables in MS word properly
On Sun, Sep 4, 2022 at 03:40 AM, Brian's Mail list account wrote:
Are you saying that if you have fast start set up, then it's merely a bit like a sleep mode where all the running information is on the drive and used at the next boot up?
- Brian, where in heaven's name have you been?! The discussions of Fast Startup being a version of hibernation where only the Windows system state is saved to disk, but not the user state(s), have occurred time and time again. And in the age of SSDs, the decrease in boot time from Fast Startup is virtually non-existent. But if Fast Startup is turned on, a shutdown followed by a power-up is a hibernation sequence for Windows, not a fresh load of the OS from disk. You get back whatever the system state was when you shut down (and, possibly, any corruption that may have occurred over time in the hibernation file - which is another reason I avoid Fast Startup like the plague). --
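If anyone wants to check their own machine, here is a minimal Python sketch. It reads the documented HiberbootEnabled registry value that controls Fast Startup; the read is read-only, so standard user rights suffice:

import winreg

POWER_KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"

def fast_startup_enabled() -> bool:
    # HiberbootEnabled is 1 when Fast Startup is on, 0 when it is off.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POWER_KEY) as key:
        value, _value_type = winreg.QueryValueEx(key, "HiberbootEnabled")
    return bool(value)

if __name__ == "__main__":
    print("Fast Startup is", "on" if fast_startup_enabled() else "off")

If it reports "on", then a shutdown followed by a power-up resumes from the hibernation file as described above; Restart always performs a full reboot.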
Brian - Windows 10, 64-Bit, Version 21H2, Build 19044
It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.
~ Irving Babbitt
Re: Explainer: accessible drag and drop notifications from the perspective of accessibility API's
System means not in the browser, sir: regular buttons like what we see in the system tray, or objects inside the computer apart from websites.
On 04/09/2022, Gene <gsasner@...> wrote: I don't know what you mean by "in system".
Gene
On 9/4/2022 7:20 AM, Aravind R wrote:
Yes. I was in browse mode while using the office's learning course where we have to drag and drop objects. In the system, the buttons can be dragged and dropped by us.
On 04/09/2022, Gene <gsasner@...> wrote:
Are you on an Internet site or somewhere else where browse mode is being used? I don't think drag and drop can work for a blind user while in browse mode.
Gene
On 9/4/2022 6:59 AM, Aravind R wrote:
I am somehow able to drag and drop elements in the system tray using NVDA's facility and its add-on. But it's not working in my bank's learning course, where we have to match the right statements with the correct words.
On 04/09/2022, Brian's Mail list account via groups.io <bglists@...> wrote:
I have to say I have never really managed to get drag and drop to be understandable. In the main, along with many other Windows things, there is often more than one way to do stuff, and cut and paste can often be done instead and be a damn sight less complicated. Of course I can see that for a sighted person drag and drop is intuitive and obvious, but it's harder with a screen reader as you do not have the overview that you do with a working pair of eyes. Brian
--
bglists@... Sent via blueyonder. (Virgin media) Please address personal E-mail to:- briang1@..., putting 'Brian Gaff' in the display name field.
----- Original Message ----- From: "Joseph Lee" <joseph.lee22590@...> To: <nvda@nvda.groups.io> Sent: Saturday, September 03, 2022 8:41 PM Subject: [nvda] Explainer: accessible drag and drop notifications from the perspective of accessibility API's
Hi all,
I understand that this is quite a strange (or, more likely, geeky) subject line, but I believe that this is one of those last opportunities where I can pass on whatever I know to the next group of power users and developers before I pass out from graduate school. While what I'm going to write for a while (in the "Explainer" series) is geared more towards NVDA developers and add-on authors, I think it would be helpful to post some notes to the users list so at least some folks can understand how a screen reader and related technologies work behind the scenes (there are other reasons, some related to academics, others related to ongoing screen reader development, some related to mentoring). The post below is the result of what I might only describe as a "celebratory week" and my wish to document most (if not all) of what happened to bring about a bug fix that took more than four years to get right, as it will also talk about upcoming changes to NVDA (2022.4 to be exact), the subject being accessible drag and drop (discussed at length months ago).
In April 2022, a user asked the question, "how can I drag and drop items?" Among many responses (see the group archive for details), the mouse-based drag and drop method was discussed, as well as commands provided by NVDA to manipulate the mouse in doing so. But two things were not discussed then (or, if discussed, received little attention): how do controls expose drag and drop capability, and can screen readers inform users about a drag and drop operation? These I will answer below.
Before discussing accessible drag and drop, it is important to understand how graphical user interface (GUI) controls are laid out on screen, as well as visual and nonvisual interaction paradigms. At a high level, GUI elements are organized into a hierarchy (or a tree, if you will). At the top of the hierarchy is the shell, sometimes called the "desktop". Below the shell element (visually, on top of it) are the top-level windows for each app, housing all the controls that the app will show you. Below each top-level window are smaller windows, housing elements that will be shown on the screen most of the time (the one shown is called the "foreground window"). Inside these smaller windows are another set of windows, inside those are even smaller windows, and eventually we end up with the actual control being interacted with (button, checkbox, edit field, web document, and so on). This is the foundation for NVDA's object navigation principles - when you move between objects (NVDA+Numpad arrow keys/NVDA+Shift+arrow keys), you are effectively moving between windows at a higher or lower level.
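To make this hierarchy concrete, here is a minimal Python sketch - Python because NVDA itself is written in it - that walks the window tree using only documented user32 functions via ctypes, printing each visible top-level window and a few of its descendant windows:

import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
# Callback type shared by EnumWindows and EnumChildWindows.
EnumProc = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)

def window_title(hwnd):
    buf = ctypes.create_unicode_buffer(256)
    user32.GetWindowTextW(hwnd, buf, 256)
    return buf.value

def some_descendants(hwnd, limit=5):
    # EnumChildWindows walks all descendants, not just immediate children.
    found = []
    def callback(child, _lparam):
        title = window_title(child)
        if title:
            found.append(title)
        return len(found) < limit  # returning False stops the enumeration
    user32.EnumChildWindows(hwnd, EnumProc(callback), 0)
    return found

def top_level(hwnd, _lparam):
    title = window_title(hwnd)
    if user32.IsWindowVisible(hwnd) and title:
        print(title)
        for child_title in some_descendants(hwnd):
            print("  descendant window:", child_title)
    return True  # keep enumerating

user32.EnumWindows(EnumProc(top_level), 0)

This only shows classic window handles; accessibility API's such as UIA expose a richer version of the same tree (including controls without their own window handle), which is what object navigation actually walks.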
A useful analogy is street addresses. Suppose you order a computer from a retailer, and whoever is responsible for delivery (perhaps Amazon, drones, or whatever or whoever) needs to locate where you live. The address you provide to the retailer will first include the country of residence, followed by regions (states, provinces, and what not), themselves divided into smaller local regions (cities, counties, etc.), in turn moving down to the street grid, and finally the street address. Another analogy, albeit visual at first, is the focus of a camera - you can zoom in or out of a specific thing the camera is pointing at, and sometimes users move their camera to focus on a different thing. This is how GUI elements are internally organized: the shell window houses top-level application windows, which in turn house smaller windows, toolbars, the menu bar and what not, in turn leading users to a specific control (I and so many others have discussed why this is important in NVDA's object navigation principles many times, so I'll skip it for now).
Another thing to keep in mind is visual and nonvisual interaction paradigms. Our eyes (visual senses) are perhaps the first thing we use when looking at and interacting with a person, a thing, a screen control, or an idea. As GUI's use visual senses to help people perform tasks with computers (nowadays tablets and even augmented reality (AR) tools), a paradigm best suited for visual manipulation of screen controls is the dominant interaction paradigm. These include mice, touchscreens, digitizers, eye control, and our thoughts.
One way to interact with and manipulate elements on screen is drag and drop. In short, the item to be dragged is first selected (technically, "grabbed") using whatever input tool we use (moving system focus with the keyboard, holding down the mouse, a long press on touchscreens, focusing a bit longer on the item with our eyes, etc.), then it is dragged across the screen to the desired location (called a "drop target"). For applications of this concept, see the April 2022 archive of the NVDA users list.
Even as we simply hold down the mouse and drag something, many things happen in the background, some of which are accessibility related if conditions are right. First, the app informs accessibility API's that an item is grabbed. Second, accessibility API's inform anyone listening that a drag and drop operation is in progress (this differs between API's, as explained below). Third, while dragging and dropping, accessibility API's may inform users about where the item is going, sometimes helped by the host app (not all apps do this). Lastly, when a drop operation happens, the app informs folks that drag and drop is complete (or canceled in some cases), which in turn allows accessibility API's to inform users that drag and drop is complete.
Because at least two accessibility API's are in use by Windows-based assistive technologies, the above process differs between API's:
* Microsoft Active Accessibility (MSAA/IAccessible): an attribute change informs screen readers and other assistive technologies that a drag and drop operation is in progress.
* UI Automation (UIA): up to six events and six properties are defined just for accessible drag and drop. When a drag starts, the item in question sets the "is grabbed" UIA property to True, and this gets picked up by the drag start UIA event and announced to screen readers. While items are being dragged, UIA uses two properties to communicate where the item is going, and in some cases the control that supports dropping the grabbed item also raises a series of events and property changes to inform screen readers and other technologies about drag and drop progress. I won't go into details here as this forum is meant for users (I can go into details if asked privately or on development-oriented forums), though the sketch below lists the event and property names for the curious.
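To make the UIA half concrete, here is a toy Python sketch that lists those six events and six properties by their symbolic names from UIAutomationClient.h, with a small dispatcher showing how a screen reader might phrase them. A real implementation would register for these events through IUIAutomation::AddAutomationEventHandler (for example via comtypes); that plumbing is omitted here, so treat this as an illustration, not working screen reader code.

# Sketch only: the six UIA drag and drop events and six properties,
# by their symbolic names from UIAutomationClient.h.
DRAG_EVENTS = {
    "UIA_Drag_DragStartEventId": "drag started",
    "UIA_Drag_DragCancelEventId": "drag canceled",
    "UIA_Drag_DragCompleteEventId": "drag complete",
    "UIA_DropTarget_DragEnterEventId": "entered a drop target",
    "UIA_DropTarget_DragLeaveEventId": "left the drop target",
    "UIA_DropTarget_DroppedEventId": "dropped",
}

DRAG_PROPERTIES = (
    "UIA_DragIsGrabbedPropertyId",               # True while the item is grabbed
    "UIA_DragDropEffectPropertyId",              # what a drop would do, e.g. "move"
    "UIA_DragDropEffectsPropertyId",             # multiple possible effects
    "UIA_DragGrabbedItemsPropertyId",            # which elements are grabbed
    "UIA_DropTargetDropTargetEffectPropertyId",  # effect announced by the target
    "UIA_DropTargetDropTargetEffectsPropertyId",
)

def on_drag_event(event_name: str, element_name: str) -> str:
    """Turn a drag event into the message a screen reader might speak."""
    return f"{element_name}: {DRAG_EVENTS.get(event_name, 'unknown drag event')}"

print(on_drag_event("UIA_Drag_DragStartEventId", "Mail, Start menu tile"))

Running it prints "Mail, Start menu tile: drag started", which is roughly the shape of announcement a screen reader can build from these events.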
A few days ago, a suggestion was posted on NVDA's GitHub repository about letting the screen reader announce the dragging ("is grabbed") state, specifically when performing drag and drop from the keyboard. To do so, NVDA must recognize the "is grabbed" state, then announce the progress of this operation. The first part is now part of NVDA 2022.4 (alpha builds at the moment), and a follow-up pull request was posted to bring the second part (drag and drop progress announcements) to NVDA. For example, if you rearrange Start menu tiles in Windows 10 or pinned items in Windows 11, NVDA will be told that an item is being dragged (grabbed), and it can announce what happens when you drop the item somewhere. You can rearrange tiles in the Start menu either with the mouse or the keyboard (Alt+Shift+arrow keys).
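For add-on authors, a hypothetical NVDA global plugin along these lines could watch for the state change and speak it. This is a sketch under one stated assumption: State.DRAGGING is my guess at the relevant controlTypes member and should be verified against your NVDA version.

import globalPluginHandler
import controlTypes
import ui

class GlobalPlugin(globalPluginHandler.GlobalPlugin):
    def event_stateChange(self, obj, nextHandler):
        # State.DRAGGING is an assumed name; check controlTypes in your NVDA.
        dragging = getattr(controlTypes.State, "DRAGGING", None)
        if dragging is not None and dragging in obj.states:
            ui.message("Dragging %s" % (obj.name or "item"))
        nextHandler()  # let NVDA's normal event processing continue

Recent NVDA builds handle the grabbed-state announcement themselves; the sketch just shows where in the event flow such an announcement originates.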
Tip: if you are using a specific add-on, NVDA can announce the drag and drop state and progress (I will explain why I'm offering this tip in a follow-up post).
This raises two questions:
1. Why will NVDA announce only part of the progress when I use the keyboard to drag and drop items? Unlike mouse-based drag and drop, where you must hold the mouse down, keyboard commands force the item to be dragged somewhere in one sweep (many things happen, but that's what I think is the best way to explain it).
2. How come I cannot hear drag and drop announcements when dragging things in some places? For three reasons: first, the item being dragged must support drag and drop operations and properties in a way that can be made accessible (sometimes the drop target control must also support accessible drag and drop properly). Second, not all accessibility API's provide accessible drag and drop announcements and events. Third, even if accessible drag and drop is possible, assistive technologies such as screen readers must support it. The first reason is why NVDA (and, for that matter, Narrator) may not announce the progress of rearranging taskbar icons in Windows 11 if done from the keyboard. The second is the reason why accessible drag and drop was added in Windows 8 via UI Automation. The third reason is why Narrator could announce accessible drag and drop progress and events when NVDA could not - until now (or rather, until 2018, improved a year later).
I hope this at least gave you an overview of the behind-the-scenes work required to support accessible drag and drop.
Cheers,
Joseph
-- nothing is difficult unless you make it appear so.
r. aravind,
manager Department of sales bank of baroda specialised mortgage store, Chennai. mobile no: +91 9940369593, email id : aravind_069@..., aravind.andhrabank@.... aravind.rajendran@....
Re: Explainer: accessible drag and drop notifications from the perspective of accessibility API's
I don't know what you mean by "in system".
Gene
Re: Explainer: accessible drag and drop notifications from the perspective of accessibility API's
Yes. I was in browse mode while using the office's learning course where we have to drag and drop objects. In the system, the buttons can be dragged and dropped by us.
-- nothing is difficult unless you make it appear so.
r. aravind,
manager Department of sales bank of baroda specialised mortgage store, Chennai. mobile no: +91 9940369593, email id : aravind_069@..., aravind.andhrabank@.... aravind.rajendran@....
Re: Explainer: accessible drag and drop notifications from the perspective of accessibility API's
Are you on an Internet site or somewhere else where browse mode is being used? I don't think drag and drop can work for a blind user while in browse mode.
Gene
Re: Explainer: accessible drag and drop notifications from the perspective of accessibility API's
I am somehow able to drag and drop elements in the system tray using NVDA's facility and its add-on. But it's not working in my bank's learning course, where we have to match the right statements with the correct words.
--
nothing is difficult unless you make it appear so.
r. aravind,
manager Department of sales bank of baroda specialised mortgage store, Chennai. mobile no: +91 9940369593, email id : aravind_069@..., aravind.andhrabank@.... aravind.rajendran@....
Re: NVDA is not reading/working in tables in MS word properly
Hello Brian sir, As you told me, I tried the link to the issues page you gave a couple of minutes ago. Actually, I had already tried the first solution on that page (restarting the computer) any number of times. I tried the other two options as well, but they had no effect on my problem with NVDA. But I have something to tell you here! After all the methods to repair the issue failed, I was working on my new Word document and there I was required to make a table again. And there, when I used tables, it worked first class! It did not have any of the problems I described earlier. So I shifted all my data from the document that was having problems to the new Word document, and it had no problems at all! So, would any of the members please tell me how this is possible in the context of my above text.
Regards,
Are you saying that if you have fast start set up, then it's merely a bit like a sleep mode where all the running information is on the drive and used at the next boot up?
One might have thought that Windows should call it something else then, since it's not obvious to a lot of people I know that shut down and restart is not a complete system reboot.
Brian
--
bglists@...
Sent via blueyonder.(Virgin media)
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.
----- Original Message -----
From: "Brian Vogel" <britechguy@...>
To: <nvda@nvda.groups.io>
Sent: Saturday, September 03, 2022 5:44 PM
Subject: Re: [nvda] NVDA is not reading/working in tables in MS word
properly
On Sat, Sep 3, 2022 at 12:39 PM, mr Krit Kumar kedia wrote:
>
> Can my problem be resolved with the help of an easy table navigator addon?
-
Perhaps, but have you followed the previously offered, The Most Basic
Troubleshooting Steps for Suspected NVDA Issues? (
https://nvda.groups.io/g/nvda/message/81494 ) ?
The only way to figure out what might be wrong is by systematic diagnostic
steps to eliminate possibilities, and the ones in that message are the first
things to try in order to try to rule in or rule out certain things. And if
you have Fast Startup enabled on your computer then do a Windows Restart
from the power menu, not a shutdown.
--
Brian - Windows 10, 64-Bit, Version 21H2, Build 19044
It is well to open one's mind but only as a preliminary to closing it . . .
for the supreme act of judgment and selection.
~ Irving Babbitt
|
|
Problem with placeMarkers add-on in MS Word in Windows 10
Marco Oros
Hello.
I have a problem in Microsoft Word.
I can't use the Bookmarks add-on in this software, and I don't know why. It was possible previously, and I also have some bookmarks in my document that I look up and edit.
Please, could you look into it? It is Word 2016.
Marco
|
|
Re: Explainer: accessible drag and drop notifications from the perspective of accessibility API's
Brian's Mail list account
I have to say I have never really managed to get drag and drop to be understandable. In the main, along with many other Windows things, there is often more than one way to do stuff, and cut and paste can often be done instead and be a damn sight less complicated. Of course I can see that for a sighted person drag and drop is intuitive and obvious, but it's harder with a screen reader as you do not have the overview that you do with a working pair of eyes. Brian
-- bglists@... Sent via blueyonder.(Virgin media) Please address personal E-mail to:- briang1@..., putting 'Brian Gaff' in the display name field.
----- Original Message ----- From: "Joseph Lee" <joseph.lee22590@...> To: <nvda@nvda.groups.io> Sent: Saturday, September 03, 2022 8:41 PM Subject: [nvda] Explainer: accessible drag and drop notifications from the perspective of accessibility API's
Hi all,
I understand that this is quite a strange (or more of a geeky) subject line, but I believe that this is one of those last opportunities where I can pass on whatever I know to the next group of power users and developers before I pass out from graduate school. While what I'm going to write for a while (in the "Explainer" series) is geared more towards NVDA developers and add-on authors, I think it would be helpful to post some notes to the users list so at least some folks can understand how a screen reader and related technologies work behind the scenes (there are other reasons, some related to academics, others related to ongoing screen reader development, some related to mentoring). The post below is the result of what I might only describe as a "celebratory week" and my wish to document most (if not everything) of what happened to bring about a bug fix that took more than four years to get right, as it will also talk about upcoming changes to NVDA (2022.4 to be exact), the subject being accessible drag and drop (discussed at length months ago).
In April 2022, a user asked the question, "how can I drag and drop items?" Among many responses (see the group archive for details), the mouse-based drag and drop method was discussed, as well as commands provided by NVDA to manipulate the mouse in doing so. But two things were not discussed then (or if discussed, received little attention): how do controls expose drag and drop capability, and can screen readers inform users about a drag and drop operation? These I will answer below.
Before discussing accessible drag and drop, it is important to understand how graphical user interface (GUI) controls are laid out on screen, as well as visual and nonvisual interaction paradigms. At a high level, GUI elements are organized into a hierarchy (or a tree, if you will). At the top of the hierarchy is the shell, sometimes called the "desktop". On top of (or below) the shell element are the top-level windows for each app, housing all the controls that the app will be showing you. Below each top-level window are smaller windows, housing elements that will be shown on the screen most of the time (the window currently shown is called the "foreground window"). Inside these smaller windows are another set of windows, inside those are even smaller windows, and eventually we end up with the actual control being interacted with (button, checkbox, edit field, web document, and so on). This is the foundation for NVDA's object navigation principles: when you move between objects (NVDA+numpad arrow keys/NVDA+Shift+arrow keys), you are effectively moving between windows at a higher or lower level.
A useful analogy is street addresses. Suppose you order a computer from a retailer, and whoever is responsible for delivery (perhaps Amazon, drones, or whatever or whoever) needs to locate where you live. The street address you provide to the retailer will first include the country of residence, followed by regions (states, provinces, and what not), itself divided into smaller local regions (cities, counties, etc.), in turn moving down to the street grid, and finally, the street address. Another analogy, albeit visual at first, is the focus of a camera: you can zoom in or out of a specific thing the camera is pointing at, and sometimes users move their camera to focus on a different thing. This is how GUI elements are internally organized: the shell window housing top-level application windows, in turn housing smaller windows, toolbars, the menu bar and what not, in turn leading users to a specific control (I and so many others have discussed why this is important to NVDA's object navigation principles many times, so I'll skip it for now).
Another thing to keep in mind is visual and nonvisual interaction paradigms. Our eyes (visual senses) are perhaps the first thing we use when looking at and interacting with a person, a thing, a screen control, or an idea. As GUI's use visual senses to help people perform tasks with computers (nowadays tablets and even augmented reality (AR) tools), a paradigm best suited for visual manipulation of screen controls is the dominant interaction paradigm. These include mice, touchscreens, digitizers, eye control, and our thoughts.
One way to interact with and manipulate elements on screen is drag and drop. In short, the item to be dragged is first selected (technically called "grabbed") by using whatever input tool we use (moving system focus using the keyboard, holding down the mouse, long press of touchscreens, focusing a bit longer on the item with our eyes, etc.), then it is dragged across the screen to the desired location (called a "drop target"). For applications of this concept, see April 2022 archive of NVDA users list.
Even if we hold down the mouse and drag something, many things happen in the background, some of which are accessibility related if conditions are right. First, the app informs accessibility API's that an item is grabbed. Second, accessibility API's inform anyone listening that a drag and drop operation is in progress (this differs between API's as explained below). Third, while dragging and dropping, accessibility API's may inform users about where the item is going, sometimes helped by the host app as to where things are going (not all apps do this). Lastly, when a drop operation happens, the app informs folks that drag and drop is complete (or canceled in some cases), which in turn allows accessibility API's to inform users that drag and drop is complete.
Because at least two accessibility API's are in use by Windows based assistive technologies, the above process differs between API's:
* Microsoft Active Accessibility (MSAA/IAccessible): an attribute change informs screen readers and other assistive technologies that a drag and drop operation is in progress.
* UI Automation (UIA): up to six events and six properties are defined just for accessible drag and drop. When a drag starts, the item in question sets the "is grabbed" UIA property to True, and this gets picked up by the drag start UIA event and announced to screen readers. While items are being dragged, UIA uses two properties to communicate where the item is going, and in some cases, the control that supports dropping the grabbed item also raises a series of events and property changes to inform screen readers and other technologies about drag and drop progress. I won't go into details here as this forum is meant for users (I can go into details if asked privately or on development-oriented forums).
A few days ago, a suggestion was posted on NVDA's GitHub repository about letting the screen reader announce dragging (is grabbed) state, specifically when performing drag and drop from the keyboard. In order to do so, NVDA must recognize the "is grabbed" state, then announce the progress of this operation. The first part is now part of NVDA 2022.4 (alpha builds at the moment), and a follow-up pull request was posted to bring the second part (drag and drop progress announcement) to NVDA. For example, if you rearrange Start menu tiles in Windows 10 or pinned items in Windows 11, NVDA will be told that an item is being dragged (grabbed), and it can announce what happens when you drop the item somewhere. You can rearrange tiles in Start menu either from the mouse or the keyboard (Alt+Shift+arrow keys).
Tip: if you are using a specific add-on, NVDA can announce the drag and drop state and progress (I'll explain why I mention this tip in a follow-up post).
This raises two questions:
1. Why is it that NVDA will announce only part of the progress when I use the keyboard to drag and drop items? Unlike mouse-based drag and drop, where you must hold the mouse down, keyboard commands force the item to be dragged somewhere in one sweep (many things happen, but that's what I think is the best way to explain it).
2. How come I cannot hear drag and drop announcements when dragging things in some places? For three reasons: first, the item being dragged must support drag and drop operations and properties in a way that can be made accessible (sometimes, the drop target control must also support accessible drag and drop properly). Second, not all accessibility API's provide accessible drag and drop announcements and events. Third, even if accessible drag and drop is possible, assistive technologies such as screen readers must support it. The first reason is why NVDA (and, for that matter, Narrator) may not announce progress when rearranging taskbar icons in Windows 11 from the keyboard. The second is the reason why accessible drag and drop was added in Windows 8 via UI Automation. The third reason is why Narrator could announce accessible drag and drop progress and events when NVDA could not... until now (or rather, until 2018, improved a year later).
I hope this gave you at least an overview of the behind-the-scenes work required to support accessible drag and drop.
Cheers,
Joseph
|
|
Re: NVDA is not reading/working in tables in MS word properly
Brian's Mail list account
Are you saying that if you have fast start set up, then it's merely a bit like a sleep mode where all the running information is on the drive and used on the next boot up? One might have thought that Windows should call it something else then, since it's not obvious to a lot of people I know that shut down and restart is not a complete system reboot. Brian
-- bglists@... Sent via blueyonder.(Virgin media) Please address personal E-mail to:- briang1@..., putting 'Brian Gaff' in the display name field.
----- Original Message ----- From: "Brian Vogel" <britechguy@...> To: <nvda@nvda.groups.io> Sent: Saturday, September 03, 2022 5:44 PM Subject: Re: [nvda] NVDA is not reading/working in tables in MS word properly On Sat, Sep 3, 2022 at 12:39 PM, mr Krit Kumar kedia wrote: Can my problem be resolved with the help of an easy table navigator addon?
- Perhaps, but have you followed the previously offered, The Most Basic Troubleshooting Steps for Suspected NVDA Issues? ( https://nvda.groups.io/g/nvda/message/81494 ) ? The only way to figure out what might be wrong is by systematic diagnostic steps to eliminate possibilities, and the ones in that message are the first things to try in order to try to rule in or rule out certain things. And if you have Fast Startup enabled on your computer then do a Windows Restart from the power menu, not a shutdown. -- Brian - Windows 10, 64-Bit, Version 21H2, Build 19044 It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection. ~ Irving Babbitt
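For anyone who wants to check whether Fast Startup is currently enabled before deciding between Restart and Shut down, here is a minimal sketch in Python (illustrative only, not part of the original exchange) that reads the HiberbootEnabled registry value, which controls Fast Startup (1 = enabled, 0 = disabled):

    # Illustrative check of the Fast Startup setting.
    # HiberbootEnabled = 1 means Fast Startup is on; 0 means off.
    # Reading this value does not require elevation.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Power"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "HiberbootEnabled")

    print("Fast Startup is", "enabled" if value else "disabled")

If it reports enabled, remember that only Restart gives you a truly fresh load of the OS; a shutdown followed by power-up resumes the saved Windows system state.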
|
|
NVDA Speaks Punctuation Marks in English When I'm Using a Hindi Voice
Hey everyone!
I’ve been facing this issue for a while now.
When I’m using a Hindi voice to read an article or any text in
Hindi, only the punctuation marks are spoken in English. And even
if I turn off the speaking of symbols completely, NVDA still
speaks all the punctuation marks in English while I’m reading
something in Hindi.
I have seen this problem occur in Windows One Core voices, SAPI5,
and sometimes eSpeak NG synthesizers.
If anyone wants to reproduce the issue mentioned above, switch to
a Hindi voice, and read the below snippet of an article from
Wikipedia Hindi
हरिशंकर परसाई (२२ अगस्त, १९२४ - १० अगस्त,
१९९५) हिंदी के प्रसिद्ध लेखक और व्यंगकार थे। उनका जन्म जमानी,
होशंगाबाद, मध्य प्रदेश में हुआ था। वे हिंदी के पहले रचनाकार हैं
जिन्होंने व्यंग्य को विधा का दर्जा दिलाया और उसे हल्के–फुल्के
मनोरंजन की परंपरागत परिधि से उबारकर समाज के व्यापक प्रश्नों से
जोड़ा। उनकी व्यंग्य रचनाएँ हमारे मन में गुदगुदी ही पैदा नहीं
करतीं बल्कि हमें उन सामाजिक वास्तविकताओं के आमने–सामने खड़ा करती
है, जिनसे किसी भी और राजनैतिक व्यवस्था में पिसते मध्यमवर्गीय मन
की सच्चाइयों को उन्होंने बहुत ही निकटता से पकड़ा है। सामाजिक
पाखंड और रूढ़िवादी जीवन–मूल्यों के अलावा जीवन पर्यन्त विस्ल्लीयो
पर भी अपनी अलग कोटिवार पहचान है। उड़ाते हुए उन्होंने सदैव विवेक
और विज्ञान–सम्मत दृष्टि को सकारात्मक रूप में प्रस्तुत किया है।
उनकी भाषा–शैली में खास किस्म का अपनापन महसूस होता है कि लेखक
उसके सामने ही बैठे हें।ठिठुरता हुआ गणतंत्र की रचना हरिशंकर परसाई
ने किया जो एक व्यंग है|
article link
Does anyone know what might be the issue here, and the ways to
fix it?
I’m using Windows 11 21H2 (x64) build 22000.856
and NVDA Version 2022.3beta4
Also, on a related note (kinda), is it possible to create voice
profiles, i.e. a voice profile with a specific speech rate and pitch?
This would be very helpful for people who use automatic language switching.
Thank you
--
|
|
Explainer: accessible drag and drop notifications from the perspective of accessibility API's
Hi all,
I understand that this is quite a strange (or more of a geeky) subject line, but I believe that this is one of those last opportunities where I can pass on whatever I know to the next group of power users and developers before I pass out from graduate school. While what I'm going to write for a while (in the "Explainer" series) is geared more towards NVDA developers and add-on authors, I think it would be helpful to post some notes to the users list so at least some folks can understand how a screen reader and related technologies work behind the scenes (there are other reasons, some related to academics, others related to ongoing screen reader development, some related to mentoring). The post below is the result of what I might only describe as a "celebratory week" and my wish to document most (if not everything) of what happened to bring about a bug fix that took more than four years to get right, as it will also talk about upcoming changes to NVDA (2022.4 to be exact), the subject being accessible drag and drop (discussed at length months ago).
In April 2022, a user asked the question, "how can I drag and drop items?" Among many responses (see the group archive for details), the mouse-based drag and drop method was discussed, as well as commands provided by NVDA to manipulate the mouse in doing so. But two things were not discussed then (or if discussed, received little attention): how do controls expose drag and drop capability, and can screen readers inform users about a drag and drop operation? These I will answer below.
Before discussing accessible drag and drop, it is important to understand how graphical user interface (GUI) controls are laid out on screen, as well as visual and nonvisual interaction paradigms. At a high level, GUI elements are organized into a hierarchy (or a tree, if you will). At the top of the hierarchy is the shell, sometimes called the "desktop". On top of (or below) the shell element are the top-level windows for each app, housing all the controls that the app will be showing you. Below each top-level window are smaller windows, housing elements that will be shown on the screen most of the time (the window currently shown is called the "foreground window"). Inside these smaller windows are another set of windows, inside those are even smaller windows, and eventually we end up with the actual control being interacted with (button, checkbox, edit field, web document, and so on). This is the foundation for NVDA's object navigation principles: when you move between objects (NVDA+numpad arrow keys/NVDA+Shift+arrow keys), you are effectively moving between windows at a higher or lower level.
A useful analogy is street addresses. Suppose you order a computer from a retailer, and whoever is responsible for delivery (perhaps Amazon, drones, or whatever or whoever) needs to locate where you live. The street address you provide to the retailer will first include the country of residence, followed by regions (states, provinces, and what not), itself divided into smaller local regions (cities, counties, etc.), in turn moving down to the street grid, and finally, the street address. Another analogy, albeit visual at first, is the focus of a camera: you can zoom in or out of a specific thing the camera is pointing at, and sometimes users move their camera to focus on a different thing.
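To make the hierarchy concrete, here is a minimal sketch (illustrative only, not NVDA's actual code) that walks from the shell element down to its top-level windows via UI Automation. It assumes Windows and the third-party comtypes Python package (pip install comtypes):

    # Walk the top of the GUI hierarchy: shell ("desktop") -> top-level windows.
    import comtypes.client

    # Generate and load the UI Automation wrapper module.
    comtypes.client.GetModule("UIAutomationCore.dll")
    from comtypes.gen.UIAutomationClient import CUIAutomation, IUIAutomation

    uia = comtypes.client.CreateObject(CUIAutomation, interface=IUIAutomation)

    root = uia.GetRootElement()  # the shell/"desktop" element at the top of the tree
    walker = uia.ControlViewWalker
    child = walker.GetFirstChildElement(root)
    while child:
        # Each child is a top-level window housing smaller windows and controls.
        print(child.CurrentName, "-", child.CurrentClassName)
        child = walker.GetNextSiblingElement(child)

Calling GetFirstChildElement again on any of these windows descends one more level, which conceptually mirrors what NVDA's object navigation does: each step down the tree moves to a smaller window until you reach the actual control.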
This is how GUI elements are internally organized: the shell window housing top-level application windows, in turn housing smaller windows, toolbars, the menu bar and what not, in turn leading users to a specific control (I and so many others have discussed why this is important to NVDA's object navigation principles many times, so I'll skip it for now).
Another thing to keep in mind is visual and nonvisual interaction paradigms. Our eyes (visual senses) are perhaps the first thing we use when looking at and interacting with a person, a thing, a screen control, or an idea. As GUI's use visual senses to help people perform tasks with computers (nowadays tablets and even augmented reality (AR) tools), a paradigm best suited for visual manipulation of screen controls is the dominant interaction paradigm. These include mice, touchscreens, digitizers, eye control, and our thoughts.
One way to interact with and manipulate elements on screen is drag and drop. In short, the item to be dragged is first selected (technically called "grabbed") using whatever input tool we use (moving system focus using the keyboard, holding down the mouse, a long press on touchscreens, focusing a bit longer on the item with our eyes, etc.), then it is dragged across the screen to the desired location (called a "drop target"). For applications of this concept, see the April 2022 archive of the NVDA users list.
Even if we just hold down the mouse and drag something, many things happen in the background, some of which are accessibility related if conditions are right. First, the app informs accessibility API's that an item is grabbed. Second, accessibility API's inform anyone listening that a drag and drop operation is in progress (this differs between API's as explained below). Third, while dragging and dropping, accessibility API's may inform users about where the item is going, sometimes helped by the host app (not all apps do this). Lastly, when a drop operation happens, the app informs folks that drag and drop is complete (or canceled in some cases), which in turn allows accessibility API's to inform users that drag and drop is complete.
Because at least two accessibility API's are in use by Windows-based assistive technologies, the above process differs between API's:
- Microsoft Active Accessibility (MSAA/IAccessible): an attribute change informs screen readers and other assistive technologies that a drag and drop operation is in progress.
- UI Automation (UIA): up to six events and six properties are defined just for accessible drag and drop. When drag starts, the item in question sets “is grabbed” UIA property to True, and this gets picked up by drag start UIA event and announced to screen readers. While dragging items, UIA uses two properties to communicate where the item is going, and in some cases, the control that supports dropping the grabbed item also raises a series of events and property changes to inform screen readers and other technologies about drag and drop progress. I won’t go into details here as this forum is meant for users (I can go into details if asked privately or on development-oriented forums).
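As a concrete taste of the UIA side, here is a minimal sketch (again illustrative, not NVDA's implementation) that reads the two most basic drag properties off the focused element. It assumes Windows 8 or later, so the drag identifiers are present in the type library, plus the comtypes package:

    # Read accessible drag state from the focused element via UIA.
    import comtypes.client

    comtypes.client.GetModule("UIAutomationCore.dll")
    from comtypes.gen.UIAutomationClient import (
        CUIAutomation,
        IUIAutomation,
        UIA_DragIsGrabbedPropertyId,
        UIA_DragDropEffectPropertyId,
    )

    uia = comtypes.client.CreateObject(CUIAutomation, interface=IUIAutomation)
    element = uia.GetFocusedElement()

    # True while the element is "grabbed", i.e. a drag is in progress.
    is_grabbed = element.GetCurrentPropertyValue(UIA_DragIsGrabbedPropertyId)
    # A localized description of what dropping would do (e.g. "Move"),
    # if the app provides one.
    drop_effect = element.GetCurrentPropertyValue(UIA_DragDropEffectPropertyId)

    print("Is grabbed:", bool(is_grabbed))
    print("Drop effect:", drop_effect)

Run this while dragging a draggable item (for example, a rearrangeable list entry) and the "is grabbed" value flips to True for the duration of the drag.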
A few days ago, a suggestion was posted on NVDA's GitHub repository about letting the screen reader announce the dragging (is grabbed) state, specifically when performing drag and drop from the keyboard. In order to do so, NVDA must recognize the "is grabbed" state, then announce the progress of this operation. The first part is now part of NVDA 2022.4 (alpha builds at the moment), and a follow-up pull request was posted to bring the second part (drag and drop progress announcement) to NVDA. For example, if you rearrange Start menu tiles in Windows 10 or pinned items in Windows 11, NVDA will be told that an item is being dragged (grabbed), and it can announce what happens when you drop the item somewhere. You can rearrange tiles in the Start menu either from the mouse or the keyboard (Alt+Shift+arrow keys).
Tip: if you are using a specific add-on, NVDA can announce the drag and drop state and progress (I'll explain why I mention this tip in a follow-up post).
This raises two questions:
- Why is it that NVDA will announce only part of the progress when I use the keyboard to drag and drop items? Unlike mouse-based drag and drop, where you must hold the mouse down, keyboard commands force the item to be dragged somewhere in one sweep (many things happen, but that's what I think is the best way to explain it).
- How come I cannot hear drag and drop announcements when dragging things in some places? For three reasons: first, the item being dragged must support drag and drop operations and properties in a way that can be made accessible (sometimes, the drop target control must also support accessible drag and drop properly). Second, not all accessibility API's provide accessible drag and drop announcements and events. Third, even if accessible drag and drop is possible, assistive technologies such as screen readers must support it (see the sketch after this list). The first reason is why NVDA (and, for that matter, Narrator) may not announce progress when rearranging taskbar icons in Windows 11 from the keyboard. The second is the reason why accessible drag and drop was added in Windows 8 via UI Automation. The third reason is why Narrator could announce accessible drag and drop progress and events when NVDA could not… until now (or rather, until 2018, improved a year later).
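Picking up the third reason above (the assistive technology itself must listen), here is a rough sketch of how a client could subscribe to the UIA drag start/complete events with comtypes. This is illustrative, not how NVDA actually wires things up internally:

    # Subscribe to UIA drag events, roughly as a screen reader would.
    import time
    import comtypes
    import comtypes.client

    comtypes.client.GetModule("UIAutomationCore.dll")
    from comtypes.gen.UIAutomationClient import (
        CUIAutomation,
        IUIAutomation,
        IUIAutomationEventHandler,
        TreeScope_Subtree,
        UIA_Drag_DragStartEventId,
        UIA_Drag_DragCompleteEventId,
    )

    class DragEventHandler(comtypes.COMObject):
        # COM object implementing the UIA automation event handler interface.
        _com_interfaces_ = [IUIAutomationEventHandler]

        def HandleAutomationEvent(self, sender, eventID):
            # A screen reader would speak here; we just print.
            if eventID == UIA_Drag_DragStartEventId:
                print("Drag started:", sender.CurrentName)
            elif eventID == UIA_Drag_DragCompleteEventId:
                print("Drag complete:", sender.CurrentName)

    uia = comtypes.client.CreateObject(CUIAutomation, interface=IUIAutomation)
    root = uia.GetRootElement()
    handler = DragEventHandler()

    # Listen for drag start/complete anywhere on the desktop; no cache request.
    for event_id in (UIA_Drag_DragStartEventId, UIA_Drag_DragCompleteEventId):
        uia.AddAutomationEventHandler(event_id, root, TreeScope_Subtree, None, handler)

    try:
        time.sleep(60)  # handlers are called on background threads while we wait
    finally:
        uia.RemoveAllEventHandlers()

Note that events only arrive if the app being dragged in actually raises them, which is exactly the first reason above.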
I hope this gave you at least an overview of the behind-the-scenes work required to support accessible drag and drop.
Cheers,
Joseph
|
|
Re: NVDA is not reading/working in tables in MS word properly
On Sat, Sep 3, 2022 at 12:39 PM, mr Krit Kumar kedia wrote:
Can my problem be resolved with the help of an easy table navigator addon?
- Perhaps, but have you followed the previously offered, The Most Basic Troubleshooting Steps for Suspected NVDA Issues?? The only way to figure out what might be wrong is by systematic diagnostic steps to eliminate possibilities, and the ones in that message are the first things to try in order to try to rule in or rule out certain things. And if you have Fast Startup enabled on your computer then do a Windows Restart from the power menu, not a shutdown. --
Brian - Windows 10, 64-Bit, Version 21H2, Build 19044
It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.
~ Irving Babbitt
|
|
Re: NVDA is not reading/working in tables in MS word properly
hi, I have never previously used the Easy Table Navigator add-on, because I have been facing this problem for only the past few days. I was never required to use any add-on or external help to read tables. I don't know why it is causing problems on my end. Can my problem be resolved with the help of the Easy Table Navigator add-on? Looking forward to your solution in this regard.
Krit Kedia
Hi Krit, I wonder if your table has defined the first row as the heading row? This may be set in the definition of the table. Even though the row is intended to contain data, as you say, the definition of the table might not be set correctly. All the best, Cearbhall m +353 (0)833323487 Ph: +353 (0)1-2864623 e: cearbhall.omeadhra@...
Hello all, warm greetings from my side. A few days back I updated NVDA, and as far as I know I updated it again just two days back. In MS Word 2021, when I am navigating a table of two rows and two columns, it says "table with two rows and two columns". Whether I press Tab, down, up, or Alt+Ctrl with the arrow keys, my focus always lands on the first column of the second row. I am not able to get to the first row under any condition. Thus, I am not able to complete my school work, which requires me to use tables! I would be more than happy to get a solution to this problem.
|
|
Re: NVDA is not reading/working in tables in MS word properly
Hi Krit, I wonder if your table has defined the first row as the heading row? This may be set in the definition of the table. Even though the row is intended to contain data, as you say, the definition of the table might not be set correctly. All the best, Cearbhall m +353 (0)833323487 Ph: +353 (0)1-2864623 e: cearbhall.omeadhra@...
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of mr Krit Kumar kedia
Sent: Friday, September 2, 2022 7:20 PM
To: nvda@nvda.groups.io
Subject: [nvda] NVDA is not reading/working in tables in MS word properly
Hello all, warm greetings from my side. A few days back I updated NVDA, and as far as I know I updated it again just two days back. In MS Word 2021, when I am navigating a table of two rows and two columns, it says "table with two rows and two columns". Whether I press Tab, down, up, or Alt+Ctrl with the arrow keys, my focus always lands on the first column of the second row. I am not able to get to the first row under any condition. Thus, I am not able to complete my school work, which requires me to use tables! I would be more than happy to get a solution to this problem.
|
|
Re: Fast switching between two voices
Hi, what version of this add-on works with the latest NVDA 2022.2? The version dual_voice-5.0.nvda-addon given in the link below is not compatible with this version of NVDA. Any link to an updated version would be helpful. Thanks, Ravi. V.S.Ravindran. "Excuses lead to failure!"
-----Original Message----- From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Aravind R Sent: Thursday, September 1, 2022 4:48 PM To: nvda@nvda.groups.io Subject: Re: [nvda] Fast switching between two voices
Use the Dual Voice NVDA add-on. It can switch to different voices based on the language detected. I am using it successfully for English and non-Latin languages using David and eSpeak.
On 01/09/2022, Paul Schreier via groups.io <PGSchreier@...> wrote: I would like to change between two different voices (Microsoft David / Microsoft Stefan) with just one button press / key combination. Is there an easy way to do this?
Background: I'm an American living in Switzerland. On web pages, emails, etc., I'm constantly bouncing back and forth between English and German.
It would be so nice to be able to switch between them quickly. Is it possible with dual configurations, or with input gestures? I've been studying the manuals, but I'm not making any progress. Can anyone please give me some advice and point me in the right direction?
Or does this require an add-on? My programming experience started with FORTRAN and didn't get beyond Quick BASIC, so Python is a mystery. And obviously, being severely vision impaired makes things even more challenging.
Thanks, Paul in Zurich
--
nothing is difficult unless you make it appear so.
r. aravind, manager, Department of sales, bank of baroda specialised mortgage store, Chennai.
mobile no: +91 9940369593, email id: aravind_069@..., aravind.andhrabank@..., aravind.rajendran@....
|
|
Re: Issue With Extended Winamp Hot Key For Time Remaining
Hi, Control+Shift+T announces the total time and Control+Shift+R announces the time remaining, but the total time key was accidentally disabled by the app creator and won't be reinstated until the next version. So if you want the total track time, begin playing the track in the usual way, then stop it (not pause, but stop) and use Control+Shift+R; since the time remaining on a track that is at zero is the same as the total time, you've got your total time.
Shawn Klein
On 8/31/2022 12:49 PM, Chrissie wrote: If you set Winamp to show time remaining you can do control shift r and that will give it to you.
Chrissie cochrane Executive Director The Global Voice Radio for All http://theglobalvoice.info Twitter @chrissie_artist Kindness is a powerful force: use it generously! especially on yourself.
-----Original Message----- From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Ron Canazzi Sent: Wednesday, August 31, 2022 6:11 PM To: NVDA Official Group Subject: [nvda] Issue With Extended Winamp Hot Key For Time Remaining
Hi Group,
I am using the extended Winamp add on with the new Winamp 5.9.0 Build 9999 and the keystroke control + shift + T that is supposed to announce the time remaining in a track is silent. Can anyone else reproduce this issue?
-- Signature: For a nation to admit it has done grievous wrongs and will strive to correct them for the betterment of all is no vice; For a nation to claim it has always been great, needs no improvement and to cling to its past achievements is no virtue!
|
|
Re: NVDA is not reading/working in tables in MS word properly
Is this a new behavior? If so, had you previously been using the Easy Table Navigator add-on? It appears that the version on the Community Add-Ons site is not as recent as the one in the Spanish NVDA Add-On Catalog, so it's the latter I've given the link to. This add-on makes working with tables in MS-Word and in webpages much easier. Also, have you tried The Most Basic Troubleshooting Steps for Suspected NVDA Issues?
--
Brian - Windows 10, 64-Bit, Version 21H2, Build 19044
It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.
~ Irving Babbitt
|
|
NVDA is not reading/working in tables in MS word properly
Hello all, warm greetings from my side. A few days back I updated NVDA, and as far as I know I updated it again just two days back. In MS Word 2021, when I am navigating a table of two rows and two columns, it says "table with two rows and two columns". Whether I press Tab, down, up, or Alt+Ctrl with the arrow keys, my focus always lands on the first column of the second row. I am not able to get to the first row under any condition. Thus, I am not able to complete my school work, which requires me to use tables! I would be more than happy to get a solution to this problem.
Best regards,
Krit Kedia
class: 8
India: +91
|
|