
Re: Competition was Re: [nvda] Is NVDA really dying? Can I do anything to help?

Brian's Mail list account <bglists@...>
 

Well, I remember Supernova had an "ignore changes in an area" feature. The problem, of course, is that no two sites put that in the same place.
Another annoyance is sites that immediately start playing loud music when you go to them.
The thing is that many sighted people no longer want the clever stuff. That belongs on presentation websites, not on commerce sites or those in everyday use, yet the people writing them seem to apply their skills to creating monsters rather than usable websites.
Brian

bglists@...
Sent via blueyonder.
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.

----- Original Message -----
From: "Travis Siegel" <tsiegel@...>
To: <nvda@nvda.groups.io>
Sent: Friday, February 15, 2019 2:39 PM
Subject: Re: Competition was Re: [nvda] Is NVDA really dying? Can I do anything to help?


Heh, you mention accessible web sites, and scrolling text and the like. I discovered way back in 1996, when I set up my web site for the first time, that folks aren't interested in plain text if there's a pretty scrolling banner nearby. I had some JavaScript on my site scrolling advertising offers (yes, even in 1996, pay per click was a thing), but I took it down after showing the site at a library resulted in the librarian mentioning how cool the scrolling text was and not even glancing at the rest of the site. That was my one and only experience with JavaScript on my own pages; I refuse to use it now, and have ever since that day. Call me what you like, but it's always been my opinion that the purpose of a web site is to convey information, and anything that gets in the way of conveying that information is useless.
NVDA can handle most things, but until we have a way for it to ignore scrolling banners, I fear our web browsing sessions are doomed to repeating text that has no bearing on the current page.


Re: Function Key Issues

Brian K. Lingard
 

Dear Bhavya Shah & List:
Check with your keyboard's maker; they will need the make and model number, and possibly the serial number, of your keyboard. They may provide options to change the way it works. However, if it has no Enter or Return key on the keyboard, you may be stuck with its way of entering frequently used keystrokes.
Brian K. Lingard VE3YI, AB2JI, B.A., C.T.M.

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Bhavya shah
Sent: February 11, 2019 1:37 PM
To: nvda <nvda@nvda.groups.io>
Cc: bhavya.shah125@...
Subject: [nvda] Function Key Issues

Dear all,

I apologize for this not being strictly NVDA related; it is more about adapting to a keyboard which poses unique challenges in issuing NVDA key commands. I recently purchased an external keyboard which was overall great value for money, apart from one pestiferous problem.

To emulate the Home, End, Page Up and Page Down functions, I need to press the Function (Fn) key in conjunction with the arrow keys. This is not atypical for me, as I need to do this even on my built-in laptop keyboard. However, on my laptop I am absolutely used to pressing Ctrl, followed by Function, followed by the right arrow for Control+Home, whereas on this new external keyboard I need to hold down Function first and then press keys like Ctrl and the right arrow. This is getting particularly problematic, since my brain is wired to use the Function key immediately before an arrow key only when I want to select an entire line or page, as opposed to holding down Function first and foremost and then pressing things like the NVDA modifier and Shift and the right arrow to read the status bar.

Is there any way - a software setting, a device setup or otherwise - by which I can get this external keyboard to act similarly to my laptop keyboard, in that I need to press the Function key only before the arrow keys (or the key whose secondary function will be invoked) rather than as the first key in any multi-key combination?

I would greatly appreciate any assistance in this regard.

Thanks.

--
Best Regards
Bhavya Shah

Blogger at Hiking Across Horizons: https://bhavyashah125.wordpress.com/

Contacting Me
E-mail Address: bhavya.shah125@...
LinkedIn: https://www.linkedin.com/in/bhavyashah125/
Twitter: @BhavyaShah125
Skype: bhavya.09


Can we skip graphics while navigating web pages using NVDA?

 

To all members
Does NVDA have a feature to ignore graphics while navigating a web page, and if it has, how do I enable it?
A friend asked me this, and he shared his thought that sometimes he doesn't want to see graphics in an article he is reading, and he wonders if we could add this to NVDA if it doesn't exist. I can also create a ticket for this if no one has done so before.
Any help would be appreciated.
Cuong
----------------
Dang Manh Cuong
 The Assistive technology specialist
 Sao Mai Vocational and assistive center for the blind
52/22 Huynh Thien Loc St., Hoa Thanh ward, Tan Phu dist., HCM, Vietnam.
 Tel: +8428 7302-4488
 E-mail: info@...; tech@...
 Facebook: https://www.facebook.com/saomaicenterfortheblind
 Website: http://www.trungtamsaomai.org; http://www.saomaicenter.org
Mobile / Viber / WhatsApp / Zalo: +84 902-572-300
 E-mail: dangmanhcuong@...; cuong@...
 Skype name: dangmanhcuong
 facebook: http://facebook.com/dangmanhcuong
 Twitter: @ManhCuongTech
NVDA Certified Expert: https://certification.nvaccess.org/


Re: Using Office Products with UIA

Ralf Kefferpuetz
 

Hello,

In the latest alpha builds of NVDA, you can turn on UIA support for Word in NVDA's settings, under the Advanced category.

 

Cheers,

  Ralf

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Noah Carver via Groups.Io
Sent: Saturday, 16 February 2019 04:37
To: nvda@nvda.groups.io
Subject: [nvda] Using Office Products with UIA

 

Hi All,

 

I heard some folks talking about major performance improvements provided by using UIA in Microsoft Office. Does anyone know how to enable UIA for these applications?

 

Thanks,

 

Noah

 

Sent from Mail for Windows 10

 


Re: Using Office Products with UIA

Daniel Wolak
 

Hi,

I did some digging, as I am interested in testing this myself.

In issue #7409, it is advised to do the following:


You will need to:
1. Open nvda.ini (from your user settings directory) with a text editor.
   - To get there, open the Run dialog and type "%appdata%/nvda/".
2. Look for a [UIA] section or the useInMSWordWhenAvailable setting.
   - If there is already a [UIA] section, just add the line useInMSWordWhenAvailable = True under it, indented with a tab.
   - If both are present, ensure that it is set to True, not False.
   - If neither is available, then add the following to the bottom of the file:
     [UIA]
         useInMSWordWhenAvailable = True
3. After you finish testing, you should delete this entry or restore from your backup.


Note that things may have changed since this was written, so if anyone else has alternative or more up-to-date information, do let us know.
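
For anyone who would like a quick sanity check before editing by hand, below is a minimal read-only sketch in Python. It assumes the default user configuration path (%APPDATA%\nvda\nvda.ini) described in the steps above and only reports what it finds; back up nvda.ini before actually changing anything.

# Minimal sketch: report whether nvda.ini already contains the UIA flag
# discussed above. Read-only; assumes the default %APPDATA%\nvda\nvda.ini path.
import os

ini_path = os.path.join(os.environ["APPDATA"], "nvda", "nvda.ini")

with open(ini_path, encoding="utf-8") as ini_file:
    text = ini_file.read()

if "useInMSWordWhenAvailable" in text:
    print("useInMSWordWhenAvailable is already present; check that it is set to True.")
elif "[UIA]" in text:
    print("A [UIA] section exists; add this line under it, indented with a tab:")
    print("\tuseInMSWordWhenAvailable = True")
else:
    print("No [UIA] section found; append this to the bottom of nvda.ini:")
    print("[UIA]")
    print("\tuseInMSWordWhenAvailable = True")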

HTH,


Daniel



 


CLARITY OF TERMINOLOGY AND DOCUMENTATION

Richard Bartholomew
 

Hi,

The underlying explanation of what input gestures are is excellent and
understandable; however, for me, the issue isn't semantics per se: if the
top-level description isn't immediately obvious to the end user, it has
failed in some way. In this case, the word gesture implies touch screens
and so discouraged me from finding the time to delve into an area which I
thought wasn't relevant to me. A personal failing, I admit, but we all have
demands upon our time, so if we can weed out what we think are unnecessary
diversions, that's often the pragmatic way to go!

I accept that this whole area is a minefield as you can please some of the
people, some of the time, etc, etc, etc!

Good luck!

Richard Bartholomew


Re: Some mouse navigation questions

 

Hi,

In short, I acknowledged how wrong I was in the way I explained input gestures the first time.

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of marcio via Groups.Io
Sent: Saturday, February 16, 2019 12:30 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Guys, I know I'm going to sound stupid asking something like this, and it has nothing to do with the matter at hand.

However, please, what does "to get an F" mean in Joseph Lee's message?
Just gave it a Google and found nothing, so...

Cheers,
Marcio
Follow me on Twitter

On 16/02/2019 04:41, Shaun Everiss wrote:

You know, explaining what we do depends on what we know.

As a computer geek I find it natural to go flat tack and explain things in technical terms; sometimes I forget to translate back into normal, understandable language, and I find it hard to do so at times.

 

 

On 16/02/2019 6:51 PM, Joseph Lee wrote:

Hi,

Ah, I think I see where this is going.

So if I’m getting this right, I got an F in explaining the whole thing. This is good news, as it validates a long-standing concern I have had about NVDA’s own documentation set: it needs a major overhaul (one of the reasons for creating my audio tutorials in the first place), and the approach we as developers take to explaining how things work isn’t working. As a person who is serious about documentation, I take it as a personal failure.

How about this analogy: think of gestures as roads you take to arrive at a certain location. Suppose you wish to go from point A to point B. You can either walk, drive, or fly. It doesn’t matter how you do it as long as you get to your destination. In the same way, when performing a command, it doesn’t matter how you issue it – from the keyboard, a touch gesture, or whatnot – as long as you get something from NVDA. Adding, removing, or reassigning gestures (or commands) can be akin to adding new roads, getting around an obstruction, or closing off the airspace around the route.

The things listed in the Input Gestures dialog can be thought of as follows:

  • Categories: all sorts of things you can do with NVDA, categorized into different types of tasks.
  • Command descriptions: what NVDA can do, or in case of the analogy above, your destination.
  • Gesture (or command) itself: ways of performing that command, or using the analogy above, modes of travel.

 

Hope this helps.

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Mary Otten
Sent: Friday, February 15, 2019 9:34 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Hi Joseph,
You are probably right that this should be a separate thread. However, I just want to point out that your whole explanation about JAWS scripts etc. is irrelevant to the average user, who does not really care about programming, scripting, etc. The average user wants to use the screen reader and is not interested in all the stuff that you talked about in your last message. So when you design an interface, you need to have the average user in mind. That is the person who wants to do a task with the computer, not the geek, not the techie, and not the programmer. Do you honestly think that the hundreds of millions of computer users throughout the world, that is, the sighted users, would be using computers if they had to deal with this crap? The answer is no. That’s why they invented the graphical user interface. It’s easy for sighted people. What blind programmers and others who want to make screen readers need to do is make the screen reader interface as friendly as possible for the average non-techie person,
and that means using plain language wherever possible, even if it isn’t nice and elegant.

Mary

 


On Feb 15, 2019, at 9:28 PM, Joseph Lee <joseph.lee22590@...> wrote:

Hi,

I think we should devote a separate thread for it, but to give you a short answer:

Those of you coming from the JAWS scripting world might be familiar with the terms “script” and “function”. They are essentially the same: both perform something which can be called upon from other places. The crucial difference is how it is invoked: a script is a function with a piece of input attached.

In the same way, NVDA code can define functions (they are really Python functions). Just like JAWS scripts, the one difference between a function and a script is how you invoke it: you need a piece of input to invoke a script (basically a specially tagged function), which can call other functions, run other scripts, and even kill NVDA (if you want, but don’t try that at home). As long as any kind of command is assigned to a script (a keyboard command, a touchscreen gesture, a braille display hardware button, etc.), NVDA will let you perform something. This is why you can assign touch commands to keyboard commands and vice versa: NVDA does let you assign (technically called “binding”) all sorts of input mechanisms to a command (for instance, just as you can use the keyboard to perform object navigation routines, a set of touch swipes has been defined to perform object navigation; in fact, these commands call the same routine).

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Mary Otten
Sent: Friday, February 15, 2019 9:16 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Good idea. There is probably some programming thing that gets in the way. I hope not, though, because it makes a lot of sense.


On Feb 15, 2019, at 9:14 PM, Richard Wells <richwels@...> wrote:

Why couldn't they be in different preference categories? Braille for Braille, Keyboard for Keyboard, Gestures for Touch screens and Voice control for Voice control?

On 2/15/2019 6:38 PM, Gene wrote:

The problem is, what should this array of ways of input be called?  Maybe input commands, which would cover everything.  This is just one more example of the decline of English.  Apps and applications, two different things, are increasingly used interchangeably.  The language in general is becoming less precise and accurate, and this is just one area.

 

Gene

----- Original Message -----

From: Brian Vogel

Sent: Friday, February 15, 2019 6:15 PM

Subject: Re: [nvda] Some mouse navigation questions

 

On Fri, Feb 15, 2019 at 07:06 PM, Joseph Lee wrote:

Input gestures are more abstract

Which is precisely the problem.  Calling something that is intimately familiar to the typical end user, and that is currently the only method (regardless of the keyboard being used), something "more abstract" is not the way to go. 

The folks at NV Access are far from the only software developers to go this route.   Almost every time it's the route taken, it makes things more opaque to the target demographic, which is why it should be avoided in the first place.
 
--

Brian - Windows 10 Home, 64-Bit, Version 1809, Build 17763  

A great deal of intelligence can be invested in ignorance when the need for illusion is deep.

          ~ Saul Bellow, To Jerusalem and Back

 

 

 


Re: Some mouse navigation questions

 

Guys, I know I'm going to sound stupid asking something like this, and it has nothing to do with the matter at hand.
However, please, what does "to get an F" mean in Joseph Lee's message?
Just gave it a Google and found nothing, so...

Cheers,
Marcio
Follow me on Twitter


Re: Some mouse navigation questions

 

Hi,

The post I wrote below reminds me of a common problem I encounter as new Windows 10 features are released: unlabeled controls. I have spent countless hours debugging and correcting this problem, even talking to the Microsoft people in charge of features with accessibility issues. But I’ll save tales from that adventure for another thread.

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Joseph Lee via Groups.Io
Sent: Friday, February 15, 2019 11:29 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Hi,

Not all controls are obligated to take input from all forms. This mostly has to do with the design of the app or site in question, or with the operating system not understanding what NVDA wants it to do. This can sometimes be remedied by changing certain internal parts of a control, and we know the effort that must be spent on persuading developers to take inclusive design seriously.

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Gene
Sent: Friday, February 15, 2019 11:14 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Your explanation reminds me of something I've wondered about.  If these questions aren't clear, let me know.  It appears that in some programs, clicking with a mouse works but enter doesn't.  On some web pages with some controls, that is true as well.  I know that on web pages, you can create links that only respond to mouse clicks.  Is that the case in programs or are the icons just somehow not activated by enter, not by code the designer intentionally uses to limit the icon, but for other reasons?  Also, why would anyone want to create a link or control that can only be mouse activated?

 

Gene



Re: Some mouse navigation questions

 

Hi,

Not all controls are obligated to take input from all forms. This mostly has to do with the design of the app or site in question, or with the operating system not understanding what NVDA wants it to do. This can sometimes be remedied by changing certain internal parts of a control, and we know the effort that must be spent on persuading developers to take inclusive design seriously.

Cheers,

Joseph

 



Re: Some mouse navigation questions

Chris Shook
 

Richard Wells, could you contact me privately? I have a question for you.


Re: Some mouse navigation questions

Gene
 

Your explanation reminds me of something I've wondered about.  If these questions aren't clear, let me know.  It appears that in some programs, clicking with a mouse works but enter doesn't.  On some web pages with some controls, that is true as well.  I know that on web pages, you can create links that only respond to mouse clicks.  Is that the case in programs or are the icons just somehow not activated by enter, not by code the designer intentionally uses to limit the icon, but for other reasons?  Also, why would anyone want to create a link or control that can only be mouse activated?
 
Gene



Re: Some mouse navigation questions

 

You know, explaining what we do depends on what we know.

As a computer geek I find it natural to go flat tack and explain things in technical terms; sometimes I forget to translate back into normal, understandable language, and I find it hard to do so at times.





Re: Some mouse navigation questions

 

Hi,

Ah, I think I see where this is going.

So if I’m getting this right, I got an F in explaining the whole thing. This is good news, as it validates a long-standing concern I have had about NVDA’s own documentation set: it needs a major overhaul (one of the reasons for creating my audio tutorials in the first place), and the approach we as developers take to explaining how things work isn’t working. As a person who is serious about documentation, I take it as a personal failure.

How about this analogy: think of gestures as roads you take to arrive at a certain location. Suppose you wish to go from point A to point B. You can either walk, drive, or fly. It doesn’t matter how you do it as long as you get to your destination. In the same way, when performing a command, it doesn’t matter how you issue it – from the keyboard, a touch gesture, or whatnot – as long as you get something from NVDA. Adding, removing, or reassigning gestures (or commands) can be akin to adding new roads, getting around an obstruction, or closing off the airspace around the route.

The things listed in the Input Gestures dialog can be thought of as follows:

  • Categories: all sorts of things you can do with NVDA, categorized into different types of tasks.
  • Command descriptions: what NVDA can do, or in case of the analogy above, your destination.
  • Gesture (or command) itself: ways of performing that command, or using the analogy above, modes of travel.

 

Hope this helps.

Cheers,

Joseph

 



Re: Some mouse navigation questions

 

Hi,

More importantly, let me know if the below description is digestible (I’d be happy to give you a detailed description of how it works internally, provided that I have enough strength to do it before sleep overtakes me).

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Joseph Lee via Groups.Io
Sent: Friday, February 15, 2019 9:28 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Hi,

I think we should devote a separate thread for it, but to give you a short answer:

Those of you coming from the JAWS scripting world might be familiar with the terms “script” and “function”. They are essentially the same: both perform something that can be called from other places. The crucial difference is how each is invoked: a script is a function with a piece of input attached.

In the same way, NVDA code can define functions (they are really Python functions). Just as with JAWS scripts, the one difference between a function and a script is how you invoke it: you need a piece of input to invoke a script (basically a specially tagged function), which can then call other functions, run other scripts, and even kill NVDA (if you want, but don’t try that at home). As long as some kind of command is assigned to a script (a keyboard command, a touchscreen gesture, a braille display hardware button, etc.), NVDA will let you perform something. This is why you can assign touch commands to keyboard commands and vice versa: NVDA does let you assign (technically called “binding”) all sorts of input mechanisms to a command. For instance, just as you can use the keyboard to perform object navigation, a set of touch swipes has been defined for object navigation; in fact, these commands call the same routine.

Cheers,

Joseph

 


 

 




Re: Some mouse navigation questions

 

Hi,

I think we should devote a separate thread for it, but to give you a short answer:

Those of you coming from the JAWS scripting world might be familiar with the terms “script” and “function”. They are essentially the same: both perform something that can be called from other places. The crucial difference is how each is invoked: a script is a function with a piece of input attached.

In the same way, NVDA code can define functions (they are really Python functions). Just as with JAWS scripts, the one difference between a function and a script is how you invoke it: you need a piece of input to invoke a script (basically a specially tagged function), which can then call other functions, run other scripts, and even kill NVDA (if you want, but don’t try that at home). As long as some kind of command is assigned to a script (a keyboard command, a touchscreen gesture, a braille display hardware button, etc.), NVDA will let you perform something. This is why you can assign touch commands to keyboard commands and vice versa: NVDA does let you assign (technically called “binding”) all sorts of input mechanisms to a command. For instance, just as you can use the keyboard to perform object navigation, a set of touch swipes has been defined for object navigation; in fact, these commands call the same routine.

Cheers,

Joseph
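
To illustrate the “binding” point above, the sketch below maps two different kinds of input, a keyboard command and a touch gesture, to one script that calls a single shared routine. It uses the classic __gestures dictionary style of NVDA add-ons; both identifier strings are assumptions for illustration and should be checked against the Input Gestures dialog before reuse.

import globalPluginHandler
import ui


class GlobalPlugin(globalPluginHandler.GlobalPlugin):

    def _announce(self):
        # The shared routine: plain Python, reachable from any script.
        ui.message("Same routine, different input")

    def script_announceSomething(self, gesture):
        """Announces a sample message (example only)."""
        # Whichever input arrives, keyboard or touch, ends up here.
        self._announce()

    __gestures = {
        # Keyboard and touch identifiers bound to the same script.
        "kb:NVDA+shift+a": "announceSomething",
        "ts:3finger_tap": "announceSomething",  # assumed touch identifier
    }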

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Mary Otten
Sent: Friday, February 15, 2019 9:16 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Good idea. There is probably some programming issue that gets in the way. I hope not, though, because the idea makes a lot of sense.



 

 


Re: Some mouse navigation questions

Mary Otten
 

Good idea. There is probably some programming issue that gets in the way. I hope not, though, because the idea makes a lot of sense.



On Feb 15, 2019, at 9:14 PM, Richard Wells <richwels@...> wrote:

Why couldn't they be in different preference categories? Braille for Braille, Keyboard for Keyboard, Gestures for Touch screens and Voice control for Voice control?


 

 


Re: Some mouse navigation questions

Richard Wells
 

Why couldn't they be in different preference categories? Braille for Braille, Keyboard for Keyboard, Gestures for Touch screens and Voice control for Voice control?

On 2/15/2019 6:38 PM, Gene wrote:
The problem is, what should this array of ways of input be called? Maybe input commands, which would cover everything. This is just one more example of the decline of English. Apps and applications, two different things, are increasingly used interchangeably. The language in general is becoming less precise and accurate, and this is just one area.
 
Gene
----- Original Message -----
Sent: Friday, February 15, 2019 6:15 PM
Subject: Re: [nvda] Some mouse navigation questions

On Fri, Feb 15, 2019 at 07:06 PM, Joseph Lee wrote:
Input gestures are more abstract
Which is precisely the problem. Calling something that is intimately familiar to the typical end user, and that is currently the only method (regardless of the keyboard being used), something "more abstract" is not the way to go.

The folks at NV Access are far from the only software developers to go this route. Almost every time this route is taken, it makes things more opaque to the target demographic, which is why it should be avoided in the first place.
 
--

Brian - Windows 10 Home, 64-Bit, Version 1809, Build 17763  

A great deal of intelligence can be invested in ignorance when the need for illusion is deep.

          ~ Saul Bellow, To Jerusalem and Back

 

 


Using Office Products with UIA

 

Hi All,

 

I've heard some folks talking about major performance improvements from using UIA in Microsoft Office. Does anyone know how to enable UIA for these applications?

 

Thanks,

 

Noah

 

Sent from Mail for Windows 10