Re: Some mouse navigation questions

Gene
 

Your explanation reminds me of something I've wondered about. If these questions aren't clear, let me know. It appears that in some programs, clicking with the mouse works but pressing Enter doesn't. The same is true of some controls on some web pages. I know that on web pages you can create links that respond only to mouse clicks. Is that the case in programs as well, or are the icons somehow not activated by Enter for other reasons, rather than because the designer intentionally wrote them to be limited that way? Also, why would anyone want to create a link or control that can only be activated with the mouse?
 
Gene

----- Original Message -----
From: Joseph Lee
Sent: Friday, February 15, 2019 11:51 PM
Subject: Re: [nvda] Some mouse navigation questions

Hi,

Ah, I think I see where this is going.

So if I'm getting this right, I got an F in explaining the whole thing. In a way, this is good news, as it validates a long-standing concern I've had regarding NVDA's own documentation set: it needs a major overhaul (one of the reasons I created my audio tutorials in the first place), and the approach we as developers take to explain how things work isn't working. As someone who is serious about documentation, I take it as a personal failure.

How about this analogy: think of gestures as roads you take to arrive at a certain location. Suppose you wish to go from point A to point B. You can walk, drive, or fly. It doesn't matter how you do it as long as you get to your destination. In the same way, when performing a command, it doesn't matter how you invoke it, whether from the keyboard, with a touch gesture, or what not, as long as you get something from NVDA. Adding, removing, or reassigning gestures (or commands) is akin to adding new roads, getting around an obstruction, or closing off the airspace around the route.

The things listed in the Input Gestures dialog can be thought of as follows (a rough sketch in code follows the list):

  • Categories: all sorts of things you can do with NVDA, categorized into different types of tasks.
  • Command descriptions: what NVDA can do, or, in the case of the analogy above, your destination.
  • Gesture (or command) itself: ways of performing that command, or, using the analogy above, modes of travel.
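
For instance, the three levels above might be sketched in Python like this; the category names, command descriptions, and gesture identifiers below are illustrative examples only, not NVDA's internal data:

    input_gestures = {
        # Category: a group of related tasks.
        "Object navigation": {
            # Command description: the destination you want to reach.
            "Moves to the next object": [
                # Gestures: the different roads that get you there.
                "kb:NVDA+numpad6",                   # desktop keyboard layout
                "kb(laptop):NVDA+shift+rightArrow",  # laptop keyboard layout
                "ts(object):flickRight",             # a touch swipe bound to the same command
            ],
        },
    }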

 

Hope this helps.

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Mary Otten
Sent: Friday, February 15, 2019 9:34 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Hi Joseph,
You are probably right that this should be a separate thread. However, I just want to point out that your whole explanation about JAWS scripts etc. is irrelevant to the average user, who does not really care about programming, scripting, etc. The average user wants to use the screen reader and is not interested in all the stuff that you talked about in your last message. So when you design an interface, you need to have the average user in mind. That is the person who wants to do a task with the computer, not the geek, not the techie, and not the programmer. Do you honestly think that the hundreds of millions of computer users throughout the world, that is, the sighted users, would be using computers if they had to deal with this crap? The answer is no. That's why they invented the graphical user interface. It's easy for sighted people. What blind programmers and others who want to make screen readers need to do is make the screen reader interface as friendly as possible for the average non-techie person, and that means using plain language wherever possible, even if it isn't nice and elegant.

Mary

 


On Feb 15, 2019, at 9:28 PM, Joseph Lee <joseph.lee22590@...> wrote:

Hi,

I think we should devote a separate thread to it, but to give you a short answer:

Those of you coming from the JAWS scripting world might be familiar with the terms "script" and "function". They are essentially the same: both perform something and can be called upon from other places. The crucial difference is how each is invoked: a script is a function with a piece of input attached.

In the same way, NVDA code can define functions (they are really Python functions). Just like with JAWS scripts, the one difference between a function and a script is how you invoke it: you need a piece of input to invoke a script (basically a specially tagged function), which can in turn call other functions, run other scripts, and even kill NVDA (if you want, but don't try that at home). As long as some kind of command is assigned to a script (a keyboard command, a touchscreen gesture, a braille display hardware button, etc.), NVDA will let you perform something. This is why you can assign touch commands alongside keyboard commands and vice versa: NVDA does let you assign (technically called "binding") all sorts of input mechanisms to a command. For instance, just as you can use the keyboard to perform object navigation, a set of touch swipes has been defined to perform object navigation; in fact, these commands call the same routine.
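
To make that concrete, here is a minimal sketch of an NVDA global plugin; the module name, script name, message text, and the touch gesture identifier are illustrative choices of mine rather than anything shipped with NVDA. It binds a keyboard command and a touch gesture to the same Python function, so either form of input runs the same code:

    # globalPlugins/sample.py (hypothetical add-on module)
    import globalPluginHandler
    import ui
    from scriptHandler import script

    class GlobalPlugin(globalPluginHandler.GlobalPlugin):

        @script(
            # This description is what the Input Gestures dialog shows.
            description="Reports a sample message",
            # Two kinds of input bound to the same script:
            # a keyboard command and a two-finger touch flick.
            gestures=["kb:NVDA+shift+y", "ts:2finger_flickRight"],
        )
        def script_reportSample(self, gesture):
            # The ordinary Python function that does the work,
            # no matter which input mechanism invoked it.
            ui.message("Hello from a sample script")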

Cheers,

Joseph

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Mary Otten
Sent: Friday, February 15, 2019 9:16 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

 

Good idea. There is probably some programming thing that gets in the way. I hope not, though, because it makes a lot of sense.



On Feb 15, 2019, at 9:14 PM, Richard Wells <richwels@...> wrote:

Why couldn't they be in different preference categories? Braille for Braille, Keyboard for Keyboard, Gestures for Touch screens and Voice control for Voice control?

On 2/15/2019 6:38 PM, Gene wrote:

The problem is, what should this array of ways of input be called? Maybe input commands, which would cover everything. This is just one more example of the decline of English. Apps and applications, two different things, are used increasingly interchangeably. The language in general is becoming less precise and accurate, and this is just one area.

 

Gene

----- Original Message -----

From: Brian Vogel

Sent: Friday, February 15, 2019 6:15 PM

Subject: Re: [nvda] Some mouse navigation questions

 

On Fri, Feb 15, 2019 at 07:06 PM, Joseph Lee wrote:

Input gestures are more abstract

Which is precisely the problem. Calling something that is intimately familiar to the typical end user, and that is currently the only input method (regardless of which keyboard is being used), something "more abstract" is not the way to go.

The folks at NV Access are far from the only software developers to go this route. But almost every time this route is taken, it makes things more opaque to the target demographic, which is why it should be avoided in the first place.
 
--

Brian - Windows 10 Home, 64-Bit, Version 1809, Build 17763  

A great deal of intelligence can be invested in ignorance when the need for illusion is deep.

          ~ Saul Bellow, To Jerusalem and Back

 

 
