Re: Some mouse navigation questions
Hi, The post I wrote below reminds me of a common problem I encounter whenever a new Windows 10 feature is released: unlabeled controls. I spent countless hours debugging and correcting this problem, even talking to the Microsoft people in charge of features with accessibility issues. But I’ll save tales from that adventure for another thread. Cheers, Joseph
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Joseph Lee via Groups.Io
Sent: Friday, February 15, 2019 11:29 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Some mouse navigation questions

Hi,
Not all controls are obligated to accept input in all forms. This mostly has to do with the design of the app or site in question, or with the operating system not understanding what NVDA wants it to do. Sometimes this can be remedied by changing certain internal parts of a control, and sometimes we know the effort that must be spent on persuading developers to take inclusive design seriously.
Cheers,
Joseph

Gene wrote:
Your explanation reminds me of something I've wondered about. If these questions aren't clear, let me know. It appears that in some programs, clicking with a mouse works but Enter doesn't. On some web pages with some controls, that is true as well. I know that on web pages, you can create links that only respond to mouse clicks. Is that the case in programs, or are the icons somehow not activated by Enter, not because the designer intentionally coded them that way, but for other reasons? Also, why would anyone want to create a link or control that can only be mouse activated?

----- Original Message -----
Sent: Friday, February 15, 2019 11:51 PM
Subject: Re: [nvda] Some mouse navigation questions

Hi,
Ah, I think I see where this is going. So if I’m getting this right, I got an F in explaining the whole thing. This is good news, as it validates a long-standing concern I have had about NVDA’s own documentation set: it needs a major overhaul (one of the reasons for creating my audio tutorials in the first place), and the approach we as developers take to explain how things work isn’t working. As a person who is serious about documentation, I take it as a personal failure.
How about this analogy: think of gestures as roads you take to arrive at a certain location. Suppose you wish to go from point A to point B. You can either walk, drive, or fly. It doesn’t matter how you do it as long as you get to your destination. In the same way, when performing a command, it doesn’t matter how you issue it, whether from the keyboard, a touch gesture, or anything else, as long as you get something from NVDA. Adding, removing, or reassigning gestures (or commands) is akin to adding new roads, getting around an obstruction, or closing off the airspace around the route.
The things listed in the Input Gestures dialog can be thought of as follows:
- Categories: all sorts of things you can do with NVDA, grouped into different types of tasks.
- Command descriptions: what NVDA can do, or in the analogy above, your destination.
- Gestures (or commands): ways of performing that command, or in the analogy above, modes of travel.
Hope this helps.
Cheers,
Joseph

Mary Otten wrote:
Hi Joseph,
You are probably right that this should be a separate thread. However, I just want to point out that your whole explanation about JAWS scripts etc. is irrelevant to the average user, who does not really care about programming, scripting, etc. The average user wants to use the screen reader and is not interested in all the stuff that you talked about in your last message. So when you design an interface, you need to have the average user in mind. That is the person who wants to do a task with the computer, not the geek, not the techie, and not the programmer. Do you honestly think that the hundreds of millions of computer users throughout the world would be using computers if they had to deal with this crap? The answer is no. That’s why they invented the graphical user interface. It’s easy for sighted people. What blind programmers and others who make screen readers need to do is make the screen reader interface as friendly as possible for the average non-techie person, and that means using plain language wherever possible, even if it isn’t nice and elegant.

Joseph Lee wrote:
Hi,
I think we should devote a separate thread to it, but to give you a short answer: those of you coming from the JAWS scripting world might be familiar with the terms “script” and “function”. They are essentially the same: both perform something which can be called upon from other places. The crucial difference is how each is invoked: a script is a function with a piece of input attached. In the same way, NVDA code can define functions (they are really Python functions). Just as with JAWS scripts, the one difference between a function and a script is how you invoke it: you need a piece of input to invoke a script (basically a specially tagged function), which can call other functions, run other scripts, and even kill NVDA (if you want, but don’t try that at home). As long as some kind of command is assigned to a script (a keyboard command, a touchscreen gesture, a braille display hardware button, etc.), NVDA will let you perform something. This is why you can assign touch commands to keyboard commands and vice versa: NVDA does let you assign (technically called “binding”) all sorts of input mechanisms to a command. For instance, just as you can use the keyboard to perform object navigation routines, a set of touch swipes has been defined to perform object navigation; in fact, these commands call the same routine.
Cheers,
Joseph

Good idea. There is probably some programming thing that gets in the way. I hope not, though, because it makes very much sense.

On Feb 15, 2019, at 9:14 PM, Richard Wells <richwels@...> wrote:
Why couldn't they be in different preference categories? Braille for braille, keyboard for keyboard, gestures for touch screens, and voice control for voice control?

On 2/15/2019 6:38 PM, Gene wrote:
The problem is, what should this array of ways of input be called? Maybe input commands, which would cover everything. This is just one more example of the decline of English. Apps and applications, two different things, are used increasingly interchangeably. The language in general is becoming less precise and accurate, and this is just one area.

----- Original Message -----
Sent: Friday, February 15, 2019 6:15 PM
Subject: Re: [nvda] Some mouse navigation questions

On Fri, Feb 15, 2019 at 07:06 PM, Joseph Lee wrote:
"Input gestures are more abstract"
Which is precisely the problem. Calling something that is intimately familiar to the typical end user, and which is currently the only method (regardless of which keyboard is used), something "more abstract" is not the way to go.
The folks at NV Access are far from the only software developers to go this route. Almost every time this route is taken, it makes things more opaque to the target demographic, which is why it should be avoided in the first place.
--
Brian - Windows 10 Home, 64-Bit, Version 1809, Build 17763
A great deal of intelligence can be invested in ignorance when the need for illusion is deep. ~ Saul Bellow, To Jerusalem and Back
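Joseph's point above, that a script is essentially a function with a piece of input bound to it, can be sketched outside NVDA with a toy dispatcher. This is not NVDA's real scriptHandler or add-on API, just an illustration of the "binding" idea; the gesture identifiers loosely imitate NVDA's "kb:" (keyboard) and "ts:" (touch) style.

```python
def next_object():
    """The underlying routine: a plain function any code could call."""
    return "moved to next object"

# A "script" is just a function that some piece of input has been bound to.
bindings = {}

def bind(gesture, func):
    """Attach an input gesture identifier to a routine (the 'binding')."""
    bindings[gesture] = func

# The same routine can be reached from the keyboard or from a touch swipe:
bind("kb:NVDA+numpad6", next_object)
bind("ts:flickRight", next_object)

def handle_input(gesture):
    """Dispatch an incoming gesture to whatever routine it is bound to."""
    script = bindings.get(gesture)
    if script is None:
        return "gesture not bound"
    return script()
```

Here `handle_input("kb:NVDA+numpad6")` and `handle_input("ts:flickRight")` end up calling the very same `next_object` routine, which is the mechanism behind being able to reassign any form of input to any command.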
Re: Some mouse navigation questions
Hi, Not all controls are obligated to accept input in all forms. This mostly has to do with the design of the app or site in question, or with the operating system not understanding what NVDA wants it to do. Sometimes this can be remedied by changing certain internal parts of a control, and sometimes we know the effort that must be spent on persuading developers to take inclusive design seriously. Cheers, Joseph
Re: Some mouse navigation questions
Richard Wells, could you contact me privately? I have a question for you.
Re: Some mouse navigation questions
Your explanation reminds me of something I've wondered about. If these questions aren't clear, let me know. It appears that in some programs, clicking with a mouse works but Enter doesn't. On some web pages with some controls, that is true as well. I know that on web pages, you can create links that only respond to mouse clicks. Is that the case in programs, or are the icons somehow not activated by Enter, not because the designer intentionally coded them that way, but for other reasons? Also, why would anyone want to create a link or control that can only be mouse activated?
Gene
Re: Some mouse navigation questions
You know, explaining what we do depends on what we know. As a computer geek I find it natural to go flat tack and explain things in technical terms; sometimes I forget to translate back into normal, understandable language, and I find it hard to do at times.
Re: Some mouse navigation questions
Hi,
Ah, I think I see where this is going. So if I’m getting this right, I got an F in explaining the whole thing. This is good news, as it validates a long-standing concern I have had about NVDA’s own documentation set: it needs a major overhaul (one of the reasons for creating my audio tutorials in the first place), and the approach we as developers take to explain how things work isn’t working. As a person who is serious about documentation, I take it as a personal failure.
How about this analogy: think of gestures as roads you take to arrive at a certain location. Suppose you wish to go from point A to point B. You can either walk, drive, or fly. It doesn’t matter how you do it as long as you get to your destination. In the same way, when performing a command, it doesn’t matter how you issue it, whether from the keyboard, a touch gesture, or anything else, as long as you get something from NVDA. Adding, removing, or reassigning gestures (or commands) is akin to adding new roads, getting around an obstruction, or closing off the airspace around the route.
The things listed in the Input Gestures dialog can be thought of as follows:
- Categories: all sorts of things you can do with NVDA, grouped into different types of tasks.
- Command descriptions: what NVDA can do, or in the analogy above, your destination.
- Gestures (or commands): ways of performing that command, or in the analogy above, modes of travel.
Hope this helps.
Cheers,
Joseph
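The three levels Joseph describes (category, command description, gesture) can be pictured as nested data, roughly how the Input Gestures dialog organizes things. The category, command, and gesture strings below are illustrative placeholders, not NVDA's exact wording.

```python
# Rough picture of what the Input Gestures dialog presents:
# category -> command description -> the gestures bound to that command.
input_gestures = {
    "Object navigation": {                   # category: a type of task
        "Moves to the next object": [        # command: the "destination"
            "kb:NVDA+numpad6",               # gestures: "modes of travel"
            "ts:flickRight",
        ],
    },
}

def gestures_for(category, command):
    """Look up every way of invoking a given command."""
    return input_gestures.get(category, {}).get(command, [])
```

Reassigning a command in the dialog then amounts to editing the list of gestures under one command description, without touching the command itself.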
toggle quoted messageShow quoted text
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Mary Otten Sent: Friday, February 15, 2019 9:34 PM To: nvda@nvda.groups.io Subject: Re: [nvda] Some mouse navigation questions Hi Joseph, You are probably right that this should be a separate thread. However, I just want to point out that your whole explanation about JawsScripps etc. is he relevant to the average user who does not really care about programming, scripting, etc. The average user wants to use the screen reader and is not interested in all the stuff that you talked about in your last message. So when you design an interface, you need to have the average user in mind. That is the person who wants to do a task with the computer, not the geek, not the techie, and not the programmer. Do you honestly think that the hundreds of millions of computer users throughout the world, that is the cited users, would be using computers if they had to deal with this crap? The answer is no. That’s why they invented the graphic user interface. It’s easy for sighted people. What blind programmers and others who want to make screen readers need to do is make the screen reader interface as friendly as possible for the average non-techie person. and that means using plain language where ever possible, even if it isn’t nice and elegant.g Hi, I think we should devote a separate thread for it, but to give you a short answer: Those of you coming from JAWS scripting world might be familiar with the terms “script” and “function”. They are essentially the same: both perform something which can be called upon from other places. The crucial difference is how it is invoked: a script is a function with a piece of input attached. In the same way, NVDA code can define functions (they are really Python functions). 
Just like JAWS scripts, the one difference between a function and a script is how you invoke it: you need a piece of input to invoke a script (basically a specially tagged function), which can call other functions, run other scripts, and even kill NVDA (if you want, but don’t try that at home). As long as any kind of command is assigned to a script (keyboard command, a touchscreen gesture, a braille display hardware button, etc.), NVDA will let you perform something. This is why you can assign touch commands to keyboard commands and vice versa, because NVDA do let you assign (technically called “binding”) all sorts of input mechanism for a command (for instance, just as you can use keyboard to perform object navigation routines, a set of touch swipes has been defined to perform object navigation; in fact, these commands call the same routine). Cheers, Joseph Good idea. There is probably some programming thing that gets in the way. I hope not though, because it makes very much sense.
On Feb 15, 2019, at 9:14 PM, Richard Wells <richwels@...> wrote:
Why couldn't they be in different preference categories? Braille for Braille, Keyboard for Keyboard, Gestures for Touch screens and Voice control for Voice control? On 2/15/2019 6:38 PM, Gene wrote: The problem is, what should this array of ways of input be called? Maybe input commands, which would cover everything. This is just one more example of the decline of English. Apps and applications, two different things, are used increasingly interchangeably. the language in general is becoming less precise and accurate and this is just one area. ----- Original Message ----- Sent: Friday, February 15, 2019 6:15 PM Subject: Re: [nvda] Some mouse navigation questions On Fri, Feb 15, 2019 at 07:06 PM, Joseph Lee wrote: Input gestures are more abstract
Which is precisely the problem. Callin something that is intimately familiar to the typical end user, and when it's currently the only method (regardless of keyboard being used), something "more abstract" is not the way to go.
The folks at NV Access are far from the only software developers to go this route. Almost every time it's the route taken it makes things more opaque to the target demographic, which is why it should be avoided in the first place. -- Brian - Windows 10 Home, 64-Bit, Version 1809, Build 17763 A great deal of intelligence can be invested in ignorance when the need for illusion is deep. ~ Saul Bellow, To Jerusalem and Back
Re: Some mouse navigation questions
Hi, More importantly, let me know if the below description is digestible (I’d be happy to give you a detailed description of how it works internally, provided that I have enough strength to do it before sleep overtakes me). Cheers, Joseph
Re: Some mouse navigation questions
Hi Joseph, You are probably right that this should be a separate thread. However, I just want to point out that your whole explanation about JAWS scripts etc. is irrelevant to the average user, who does not really care about programming, scripting, etc. The average user wants to use the screen reader and is not interested in all the stuff that you talked about in your last message. So when you design an interface, you need to have the average user in mind. That is the person who wants to do a task with the computer, not the geek, not the techie, and not the programmer. Do you honestly think that the hundreds of millions of computer users throughout the world, that is, the sighted users, would be using computers if they had to deal with this crap? The answer is no. That’s why they invented the graphical user interface. It’s easy for sighted people. What blind programmers and others who want to make screen readers need to do is make the screen reader interface as friendly as possible for the average non-techie person. And that means using plain language wherever possible, even if it isn’t nice and elegant. Mary
Re: Some mouse navigation questions
Hi, I think we should devote a separate thread to it, but to give you a short answer: those of you coming from the JAWS scripting world might be familiar with the terms “script” and “function”. They are essentially the same: both perform something which can be called upon from other places. The crucial difference is how each is invoked: a script is a function with a piece of input attached. In the same way, NVDA code can define functions (they are really Python functions). Just like JAWS scripts, the one difference between a function and a script is how you invoke it: you need a piece of input to invoke a script (basically a specially tagged function), which can call other functions, run other scripts, and even kill NVDA (if you want, but don’t try that at home). As long as some kind of command is assigned to a script (a keyboard command, a touchscreen gesture, a braille display hardware button, etc.), NVDA will let you perform something. This is why you can assign touch commands to keyboard commands and vice versa: NVDA does let you assign (technically called “binding”) all sorts of input mechanisms to a command. For instance, just as you can use the keyboard to perform object navigation routines, a set of touch swipes has been defined to perform object navigation; in fact, these commands call the same routine. Cheers, Joseph
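[Editor's note: Joseph's "a script is a function with a piece of input attached" idea can be sketched in a few lines of Python. This is purely an illustration of the binding concept, not the real NVDA add-on API; the class, function, and gesture identifiers below are made up.]

```python
# Illustration of "a script is a function with a piece of input attached".
# This mimics NVDA's binding concept only; it is NOT the actual NVDA API.

class InputDispatcher:
    """Maps gesture identifiers (keyboard, touch, braille...) to scripts."""

    def __init__(self):
        self._bindings = {}

    def bind(self, gesture_id, script):
        # "Binding": attach any kind of input identifier to a script.
        self._bindings[gesture_id] = script

    def execute(self, gesture_id):
        # Run whatever script the incoming gesture is bound to, if any.
        script = self._bindings.get(gesture_id)
        if script is None:
            return "no binding"
        return script()

# A plain function: callable from code, but not reachable by input on its own.
def report_title():
    return "window title reported"

dispatcher = InputDispatcher()
# The same routine can be bound to a keyboard command AND a touch swipe,
# which is why keyboard and touch gestures are interchangeable.
dispatcher.bind("kb:NVDA+t", report_title)
dispatcher.bind("ts:2finger_flickUp", report_title)
```

Feeding the dispatcher either identifier runs the same routine, which is the point of the explanation above: the input mechanism is interchangeable, the routine is not.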
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Mary Otten Sent: Friday, February 15, 2019 9:16 PM To: nvda@nvda.groups.io Subject: Re: [nvda] Some mouse navigation questions Good idea. There is probably some programming thing that gets in the way. I hope not though, because it makes very much sense.
Re: Some mouse navigation questions
Good idea. There is probably some programming thing that gets in the way. I hope not though, because it makes very much sense.
Re: Some mouse navigation questions
Why couldn't they be in different preference categories? Braille
for Braille, Keyboard for Keyboard, Gestures for Touch screens and
Voice control for Voice control?
Using Office Products with UIA
Hi All, Heard some folks talking about major performance improvements provided by using UIA in Microsoft Office. Does anyone know how to enable UIA for these applications? Thanks, Noah Sent from Mail for Windows 10
Re: Captcha solving services and NVDA
Bianka and others,
While I haven't used these yet, as I've only been made aware of them today, I'll pass on two captcha solvers to you.
One of them is called
Buster.
The other is called
anti-Captcha Solver.
Again, I haven't had a chance to try them out, but I hope to do so shortly. I'm also planning to add a page to my Web site listing current as well as discontinued captcha solvers, as I think such a service is needed.
On 2/15/2019 4:58 AM, Bianka Brankovic wrote:
Hello all,
as Webvisum, at least to my knowledge, has been discontinued, I am wondering if there are affordable captcha solving services which are accessible with NVDA?
Any input on this is highly appreciated.
Thanks and kind regards,
Bianka
Re: acapella and codefactory voices
Isaac,
I think the only way to get assistance is to contact Codefactory directly.
toggle quoted messageShow quoted text
On 2/12/2019 10:34 PM, Isaac wrote:
Hi, I was wondering how to re-authorize Acapela and Code Factory voices. I lost my codes and all; is there a way I could get help?
-----Original Message-----
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Chris Shook
Sent: Tuesday, February 12, 2019 4:42 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] acapella and codefactory voices
Isaac:
I don't see any providers in your last post.
Did I miss something?
Re: Captcha solving services and NVDA
Clarissa,
Thank you so much for making us aware of this particular captcha solver. I will certainly give it a try. Some of the instructions on their page are not very user-friendly and, to put it politely, the English translation could use some work. However, the prices are certainly reasonable. As the blindness community tries to work out which captcha solvers are no longer available and which ones are working, it occurs to me that someone needs to have a Web page with current information regarding the state of captcha solving solutions. I need to be careful whenever I begin a sentence with "someone needs to ..." because sometimes that means that maybe I'm the someone who should consider volunteering. Therefore, I happily volunteer. If anybody else knows of captcha solvers which are currently available, please reply publicly to the list, and I will gather this information and will soon have a page about this set up on my personal Web site.
All the best and thank you again for letting us know about this service.
Re: Some mouse navigation questions
Hi, No, these are categories, not commands themselves. It is possible to add touchscreen gestures to a keyboard command and vice versa, and some add-ons (including at least two of mine) come with touchscreen gestures for some keyboard commands. Cheers, Joseph
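[Editor's note: Joseph's point that touch gestures can be added to keyboard commands and vice versa, together with the thread's point about reassigning conflicting shortcuts, can be sketched as a toy example. This is not NVDA's actual code; every identifier and command name here is invented for illustration.]

```python
# Toy gesture map: several input categories can point at one command,
# and a conflicting binding can be reassigned. Not the real NVDA API.

commands = {}  # gesture identifier -> command name

def assign(gesture_id, command):
    """Bind a gesture (keyboard, touch, braille...) to a command.

    Returns whatever command the gesture previously pointed at, if any,
    so the caller can detect and resolve conflicts.
    """
    displaced = commands.get(gesture_id)
    commands[gesture_id] = command
    return displaced

# One command reachable from three input categories:
assign("kb:NVDA+f12", "reportTime")
assign("ts:4finger_tap", "reportTime")
assign("br(display):routing", "reportTime")

# A second add-on wants kb:NVDA+f12 too; the user reassigns it:
conflict = assign("kb:NVDA+f12", "readClipboard")  # displaces "reportTime"
assign("kb:NVDA+shift+f12", "reportTime")          # give the old command a new key
```

The displaced-command return value is the sketch's stand-in for the conflict detection a real input-gesture dialog would perform before letting a user rebind a key.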
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of hurrikennyandopo ... Sent: Friday, February 15, 2019 5:55 PM To: nvda@nvda.groups.io Subject: Re: [nvda] Some mouse navigation questions Hi, this might be a dumb question or maybe not. I was looking through the input gestures for NVDA, and most are for the keyboard for desktop and laptop, but there is also a section, small as it is, for touch gestures. If someone had a device where they can do touch gestures, is this also the section where they assign them? When you go to the input gesture help section in the user manual, it mentions mostly keyboard and braille displays; are the touch gestures the etc. part? Gene nz On 16/02/2019 12:21 PM, Brian Vogel wrote: Richard,
I will admit that I, too, find the adoption of the terminology "input gestures" for what are almost universally dubbed "keyboard shortcuts" just plain weird. But I also know that most developers of add-ons that have keyboard shortcuts know that what they have chosen as a shortcut might conflict with something else on a user's system, and they include the ability to reassign shortcuts because of this.
-- Brian - Windows 10 Home, 64-Bit, Version 1809, Build 17763 A great deal of intelligence can be invested in ignorance when the need for illusion is deep. ~ Saul Bellow, To Jerusalem and Back
Re: Some mouse navigation questions

Sarah k Alawami
And that's how I picture a gesture. I think in pictures when I either speak or read. A command is just that: an order, a keystroke. OK, close enough.
On 15 Feb 2019, at 17:46, Mary Otten wrote:
Hi Joseph, As a former translator of both Russian and Arabic, I would like to point out that the word gesture in English would never be translated as command or anything like that in either of those languages. A gesture is something you do, say with your hands or maybe your head, as in a nod. It’s a visual signal. It has absolutely nothing in the general sense to do with inputting stuff to a computer except on a touchscreen. So I don’t know how they’re translating that in other languages. But if I were translating it into Russian, I would not use the equivalent word for gesture, as it would make no sense. Hi, Hmmm, Quentin, any thoughts? I think part of the problem may have to do with attempts at carrying old assumptions forward. I’m interested in your comments, as renaming a menu item will have vast consequences for not just users and English speakers, but speakers of other languages, tutorial writers, and NVDA promoters. Cheers, Joseph How about just "commands"? You know, K I S S? Take care On 15 Feb 2019, at 16:44, Brian Vogel wrote: On Fri, Feb 15, 2019 at 07:38 PM, Gene wrote: Maybe input commands, which would cover everything.
We're all getting very meta, and quickly, aren't we?
Gene, you and I are clearly on the same page. I'd even pare off "input." In the computing world (and even regular one, really) commands are generally thought of as input to a system/person, issued from outside.
There are definitely output commands too, but those come up in the context of programming, in which the non-programmer generally has no interest. -- Brian - Windows 10 Home, 64-Bit, Version 1809, Build 17763 A great deal of intelligence can be invested in ignorance when the need for illusion is deep. ~ Saul Bellow, To Jerusalem and Back
Re: Some mouse navigation questions
Hi, A few years ago the Korean translations team (me included) had a small debate as to how to translate the notion of “gestures” into Korean. In the end, we settled on translating “input gestures” as “hotkey settings”. As for technical writing: having delved into that for more than a decade (starting out as a high school senior in 2008), I do know how complex the process and business of tech writing and transcoding is (transcoding refers to coding something from one form of presentation to another), and how complex describing complex things is in the process. My most recent exercise in this arena has been writing expository essays on the internals of some of the most complex NVDA add-ons out there, particularly for an add-on I myself didn’t write (Systray List). Folks who’ve been in this business (professionally or as a hobby) may have common ground in talking about the process of solidifying a concept – that is, describing an abstract thing or a process in a way that many people can understand, or making it understandable to some in hopes that they can make it easier for others to follow. Also, we see different people as role models or indirect teachers of this craft – my role model is David Pogue, a former New York Times columnist who wrote books in The Missing Manual series (I don’t think he has updated his book on Windows 10 in quite a while). Perhaps I should request a lightning talk slot at this year’s NVDACon to talk about the technical writing profession… Cheers, Joseph
Re: Some mouse navigation questions
Hello!
In the Portuguese community, for Portugal and Brazil, we use only "Commands" instead of "Input gestures".
Regards,
Rui Fontes Portuguese NVDA team
At 01:36 on 16/02/2019, Joseph Lee wrote:
Re: Some mouse navigation questions
Hi,
this might be a dumb question or maybe not.
I was looking through the input gestures for NVDA, and most are for the keyboard for desktop and laptop, but there is also a section, small as it is, for touch gestures.
If someone had a device where they can do touch gestures, is this also the section where they assign them?
When you go to the input gesture help section in the user manual, it mentions mostly keyboard and braille displays; are the touch gestures the etc. part?
Gene nz