Re: Voice Activation, was The future of NVDA
Ervin, Glenn
In one of the Star Trek movies, the one with the whales, Scotty tries talking to a computer, and when someone pointed to the keyboard, he knew what to do; he said something like, "Oh, how quaint." But even in sci-fi, he knew what to do with it. Glenn
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io]
On Behalf Of Janet Brandly
I can see voice activation taking off in a home setting but not in an office or public setting. Keyboards will be around forever. Janet
From: Sky Mundell Sent: Thursday, May 31, 2018 10:40 PM To: nvda@nvda.groups.io Subject: [nvda] The future of NVDA
Hello NVDA community. It’s Sky. I wanted to ask you guys a question. Will NVDA be incorporating voice commands into the screen reader? Because a friend of mine has told me that in three years everything is going to be voice activated. Yes, we have DictationBridge for voice activation, but what my friend means is that in three years, computers, etc. will all be driven by voice activation, without a keyboard. Here is what he has to say. From: bj colt [mailto:bjcolt@...]
Hi Sky,
I just received an email from my local supermarket. I do an online shop there every week. From today I can order via Alexa, Google Home and other apps, using voice-only ordering.
I did say this is the way forward. With Amazon and Google competing, voice activation is going to be the next huge thing in computing. I've said this for a while, as you know. The next step is using actual programs/apps via voice activation. Just watch, my friend. VFO is finished, on the way out. They won't be able to compete in an open market, not as huge as this one. Just imagine, my friend. At the moment I have my favorites in a shopping list. Think about the keystrokes I need to use to get to them. Then additional items. I have to do a search of often up to 40 products with a similar name, arrowing down, tabbing down. Then adding them to my shopping basket. Going through the dates and times for delivery. Then all the keystrokes in the card details authorization process. All done with our voice, saving at least a quarter of the time normally spent shopping. This does spell the end of VFO.
Everything is going to be voice activated in the next 3 years. There isn't any other way for web developers to go.
Progress sometimes, my friend, is slow, but when it starts, it is like a high-speed jet aircraft. Nothing stands in its way.
There will be some people who won't change, or who will use both methods to carry out tasks. Now VFO have to make JAWS act on voice commands. With Dug in Microsoft, I can see VFO being left thousands of miles behind. Then, when they introduce pay-monthly fees, the very fast extinction of JAWS and other products will be very sudden and dramatic. They may think they have the market share for programs for the blind. They don't any more, and they are the ones who are blind, not us.
Live long and prosper, John
Re: The future of NVDA
bob jutzi <jutzi1@...>
And you end up with a monstrosity like Jaws.
I have JAWS as the result of the Window-Eyes migration, and it does have some rather cool features, such as the Research It tool and Convenient OCR, which now supports flatbed scanners. However, since I'm no longer employed, among other factors, and given that I've used NVDA for over two years, I will probably stick with NVDA as my primary screen reader. In the time it takes for a full JAWS installation, I can have NVDA up and running. Plus, it supports the software I use very well.
On 6/1/2018 11:58 AM, Jackie wrote:
Here's my $.02. NVDA is designed to do 1 thing & does it reasonably
Re: Easiest Way to Upgrade an Existing Portable Installation
Richard,
Well, you don't really update an existing portable copy of NVDA; you just re-create it, either during your install of the NVDA update or afterward, following the directions below.

Creating a Portable Copy of NVDA from the Installed Copy

1. Open NVDA, then the main NVDA menu (NVDA Key+N).
2. Down Arrow to the Tools submenu, then Right Arrow to open it.
3. Down Arrow to the Create Portable Copy item and activate it.
4. In the Create Portable NVDA dialog, I find it easiest to navigate to the Browse button and activate it to get a Browse dialog where you specify the folder the portable copy should be placed in. This is almost always a thumb drive or SD card, but you can create the copy in a folder on your hard drive and later copy it elsewhere.
5. If you wish to copy your current user configuration along with the portable copy, check the "Copy current user configuration" checkbox.
6. If you wish to start the portable copy immediately after creating it, check the "Start the new portable copy after creation" checkbox.
7. Navigate to the Continue button and activate it. Your portable copy will be created in the location you specified in Step 4.

-- Brian - Windows 10 Home, 64-Bit, Version 1803, Build 17134 Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong. ~ H.L. Mencken, AKA The Sage of Baltimore
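For those comfortable with the command line, the NVDA launcher can also create a portable copy directly, which makes refreshing a portable stick on each release a one-liner. The options below are the ones documented in the NVDA User Guide's command-line section as I recall them, so verify them against the guide for your NVDA version; the launcher filename and the e:\nvda destination are just examples.

```shell
REM Sketch: create (or refresh) a portable copy of NVDA at e:\nvda
REM using the downloaded launcher for the release you want.
REM --create-portable-silent skips the dialogs entirely.

nvda_2018.2.exe --create-portable-silent --portable-path="e:\nvda"

REM Or keep the usual prompts but pre-fill the destination:
nvda_2018.2.exe --create-portable --portable-path="e:\nvda"
```

Pointing --portable-path at the existing folder should overwrite the old portable copy in place, which is effectively the "upgrade" being asked about.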
Re: The future of NVDA
Sky,
NVDA already has the ability to use voice commands; see the DictationBridge add-on from http://dictationbridge.com. Yes, it does require Dragon Professional Individual or WSR (Windows Speech Recognition) for it to work.
I have used speech recognition and continue to use it. Speech recognition is good for rapid dictation: if I am writing a story, I will dictate it. However, nothing beats the keyboard for editing. The problem here is the linear nature of speech output. A sighted user can count how many words to move forward and say “move forward 10 words.” A blind user can say the same thing, but how do you count words using speech? Speech recognition can be incredibly efficient if you have a large number of commands to handle your tasks. I cannot say whether speech will ever replace the keyboard; I doubt it will, for the reasons other users have outlined. The way forward is to think of input methods as complementary.
Long live NVDA and the keyboard and speech-recognition! Pranav
Easiest Way to Upgrade an Existing Portable Installation
Richard Wells
Could some kind soul outline steps for upgrading an existing portable installation of NVDA? I keep having to start over each time an upgrade is released, because I can't figure out how to use the executable to direct its update path to, say, e:\nvda. Thanks in advance for your guidance.
Re: The future of NVDA
On Fri, Jun 1, 2018 at 08:55 am, Holger Fiallo wrote:
The eyes can go all over the place within seconds and look at something fast. JAWS, NVDA or another program can only go up, down, left or right.

It took me a much longer time than it should have to "get" this concept that should be obvious. Joseph Lee turned on the lightbulb in my head by using the phrase that the sighted see things like webpages as a gestalt. We instantly and instinctively "edit out" all irrelevant information and glide our visual focus only to those things we know in advance (or decide in the moment) that we're looking for.

A screen reader, at least with both web coding (for web pages) and other AI "selection" technology being what it is today, has absolutely no way of knowing what the intent of the user is with regard to anything they're looking at. If you're looking at a contract, for instance, via screen reader, it has no idea whether you care about "the fine print" or not, and must present everything as a result. It can be, and often is, maddening even once you're used to it.

It's also interesting, at least when I'm working with someone who was formerly sighted and we have one of the voice synthesizers that really sounds human, how there is a hesitation to interrupt or cut off the screen reader's voice output. I believe this is because we're socialized to regard interrupting as rude, and when something sounds sufficiently human as to be indistinguishable (or very nearly so) from a person, there is a reluctance to cut it off. This is not so pronounced with the more "robotic" voices, but it is still there to some extent.

One of the first things I try to drive home is that you can, and must, get used to cutting off the screen reader once you've heard enough to determine that something's not of interest, or that you need a much more strategic "read through" than just letting the screen reader start at the beginning (which with web pages always includes lots of junk links) and continue through to the end.
I have to say that, in conjunction with screen readers, I am absolutely loving the "Just Read" extension/add-on for Chrome and Firefox, respectively. It does the best job I've seen so far of extracting the text that I, as a sighted person, am looking for when I arrive on a webpage, removing all the links, etc., and just reading it, much as I would read it myself. It's a far more natural way of listening to most text being read than having a screen reader on default settings read it. You can always go back later to check for links, etc., in the screen reader itself if that seems warranted.

-- Brian
Re: The future of NVDA
Jackie
Here's my $.02. NVDA is designed to do 1 thing & does it reasonably
well, which is to be a *screen reader.* As such, it should certainly work w/other software that does speech recognition, but speech recognition should not in any way be a function of NVDA. You start trying to incorporate too many functions into a program, & it ends up doing none of them well. I was hearing back in '98 how voice recognition was going to be the be-all & end-all for computers. Transcriptionists were going to lose their jobs in droves, all computers would type letter-perfect when you spoke, etc. Could you imagine an office full of cubicles where everyone was talking to their machines? It'd be a frickin' zoo! We are certainly reaching a point where dictation to one's device is becoming an increasing reality, but as a sight-impaired computer user, you've still got to have something that lets you know what's onscreen. Unless MS decides to bring Narrator beyond the level of VoiceOver, or some sort of artificial eyesight becomes a reality (& I'd love nothing more than to see either of those happen), you'll always need something like NVDA.
On 6/1/18, The Gamages via Groups.Io <james.gamage@...> wrote: Hello Gene,
-- Remember! Friends Help Friends Be Cybersafe
Jackie McBride
Helping Cybercrime Victims 1 Person at a Time
https://brighter-vision.com
Re: The future of NVDA
Holger Fiallo <hfiallo@...>
Yes. As a formerly sighted person I agree. The eyes can go all over the
place within seconds and look at something fast. JAWS, NVDA or another program
can only go up, down, left or right.
From: Gene
Sent: Friday, June 1, 2018 10:48 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA
Thank you. I should clarify a point I
made. It is faster to skim a document by sight. But straight reading
or listening may be as fast for a blind person. I haven't asked sighted
people about this, but I generally listen at about 350 words per minute and I
can listen without loss of comprehension, though it's more taxing, at about 400
words per minute. Others can listen at faster speeds, I gather, without
loss of comprehension. I don't know how taxing faster listening is for
those who do so without comprehension loss. I don't know what the average
sighted person's speed is of reading a computer screen. The statistic I've
heard is that the average reading speed for a sighted person is about 300 words
per minute.
But the inefficiency is to try to skim using speech
or Braille compared to sight and to edit as well. I don't know how many
blind people realize this, but a sighted person can review a document, find
something that needs correction, such as a word to be changed or a phrase to be
altered, click the mouse wherever he wants to make the change, and thus
immediately move the cursor to that place. That is much faster than
listening to a document or skimming a document by speech and moving to the
place, moving by line, if necessary, then by word, then by character if
the edit is not at the immediate beginning of a line or at the very end.
I believe that Braille displays in general, have a
feature that allows you to move the cursor to where you are reading and that
would be much more efficient than speech so I won't compare Braille movement in
editing to sighted people since I don't know enough about it.
I wanted to clarify that straight reading can be done very efficiently by speech or by Braille if the person is good at fast listening or reading. My other comments don't need to be changed. And keep in mind that I'm discussing working from
the keyboard and using a screen-reader for speech in the comments I
modified. My comments about using voice commands to do such things are
unchanged.
Gene
----- Original Message -----
From: The Gamages via Groups.Io
Sent: Friday, June 01, 2018 10:16 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA
Hello Gene,
You are so correct, having been a sighted person, I agree, it is far
quicker to read a document visually than to hear it read out, the eye can
assimilate information far beyond the capabilities of the ear and far
quicker.
You also explain vividly the nightmare of trying to edit with voice
commands.
I spent years learning to touch type; I learned from sighted friends and
relatives that they mainly use one or two fingers to type onto a keyboard on a
touch screen, progress?
Like most things, we are stuck with voice output to read things, as blind
we don’t have much choice, so a mixture of technologies is the way to go, we use
the things that suit our needs and leave others to do the same, I’ve said it
before, Long live NVDA.
Best Regards,
Jim.
From: Gene
Sent: Friday, June 01, 2018 11:43 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA
Your friend is so biased that his opinions about
Window-eyes and JAWS are highly suspect. And he so much wants something to
be so that he extrapolates without considering very important factors.
Whatever happens to keyboards, some sort of ability for sighted people to do
things on a screen in other means than speech will remain, touch screens, for
example. Consider some examples:
Consider reviewing a rough draft. Which is
faster? A sighted person is not going to listen to an entire document
being read, looking for alterations to make in a draft nor is he/she going to
waste time telling the word processor to find the phrase, then continue speaking
from the start of the phrase until he says stop to define the end of the phrase,
then take some sort of action such as delete it. If he wants to delete a
phrase, what is the person going to do, move to a passage using speech, mark the
start of the passage with speech, then mark the end of the passage with speech
then say delete, then say insert and speak a new passage? The same with
copying and pasting from one document to another.
And such operations are also far more efficient
using a keyboard. I should add that I haven't used programs that operate a
computer with speech. If I'm wrong, and people who use such programs know
I am wrong, I await correction. That's how things appear to
me.
What about file management? Consider using
speech to tell a computer you want to delete fifteen noncontiguous files in a
list of two hundred. Consider how you might do it with speech as opposed
to using a keyboard.
And considerations of speed and efficiency are true
when using the keyboard and a screen-reader as well. I've mainly discussed
sighted users because innovations are developed for sighted users.
Speech will become increasingly popular and
powerful. It won't replace visual access and manipulation in
computers.
I don't use spread sheets but I expect those who do
may point out how cumbersome it would be to use speech with a spread sheet to
perform any somewhat complex series of operations with a screen-reader and some
may want to comment on the visual comparison.
As for JAWS versus Window-eyes, I won't say much
but it's not the fault of JAWS if the person was misled by his college advisor
to learn a screen-reader that has always been a far second in terms of its use
in business and institutions. He should take his anger at FS, if he must
spend so much time and energy being angry, and direct it where it belongs.
I could write paragraphs about why JAWS was dominant, some of it because it got
started first in the DOS screen-reader arena, some of it because it built up all
sorts of relationships with institutions, and some because it was better for
more employment situations than Window-eyes. How many years did
Window-eyes refuse to use scripts and limit the functionality of the
screen-reader in a stubborn attempt to distinguish itself from JAWS?
Finally, what did they do? They used scripts, which they didn't call
scripts, but apps. They weren't apps, and language should be
respected. Words have meanings, and you can't, as one of the characters
does in Through the Looking Glass, use any word to mean anything desired.
But enough. I'll leave the discussion to
others from this point unless I have something additional to add.
Gene
----- Original Message -----
From: The Gamages via
Groups.Io
Sent: Friday, June 01, 2018 2:45 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA
Voice commands, fine, but how does your friend check what he has ordered? Just a leap of faith, or a sort of screen reader which tells him? Think about it.
By his closing, your friend is a Trekkie [Star Trek fan].
Best Regards,
Jim.
From: Sky
Mundell
Sent: Friday, June 01, 2018 5:40 AM
To: nvda@nvda.groups.io
Subject: [nvda] The future of NVDA
Hello NVDA community. It’s Sky. I wanted to ask you guys a question. Will NVDA be incorporating voice commands into the screen reader? Because a friend of mine has told me that in three years everything is going to be voice activated. Yes, we have DictationBridge for voice activation, but what my friend means is that in three years, computers, etc. will all be driven by voice activation, without a keyboard. Here is what he has to say. From: bj colt
[mailto:bjcolt@...]
Hi Sky,
I just received an email from my local supermarket. I do an online shop there every week. From today I can order via Alexa, Google Home and other apps, using voice-only ordering.
I did say this is the way forward. With Amazon and Google competing, voice activation is going to be the next huge thing in computing. I've said this for a while, as you know. The next step is using actual programs/apps via voice activation. Just watch, my friend. VFO is finished, on the way out. They won't be able to compete in an open market, not as huge as this one. Just imagine, my friend. At the moment I have my favorites in a shopping list. Think about the keystrokes I need to use to get to them. Then additional items. I have to do a search of often up to 40 products with a similar name, arrowing down, tabbing down. Then adding them to my shopping basket. Going through the dates and times for delivery. Then all the keystrokes in the card details authorization process. All done with our voice, saving at least a quarter of the time normally spent shopping. This does spell the end of VFO.
Everything is going to be voice activated in the next 3 years. There isn't any other way for web developers to go.
Progress sometimes, my friend, is slow, but when it starts, it is like a high-speed jet aircraft. Nothing stands in its way.
There will be some people who won't change, or who will use both methods to carry out tasks. Now VFO have to make JAWS act on voice commands. With Dug in Microsoft, I can see VFO being left thousands of miles behind. Then, when they introduce pay-monthly fees, the very fast extinction of JAWS and other products will be very sudden and dramatic. They may think they have the market share for programs for the blind. They don't any more, and they are the ones who are blind, not us.
Live long and prosper, John
Voice Activation, was The future of NVDA
Janet Brandly
I can see voice activation taking off in a home setting but not in an
office or public setting. Keyboards will be around forever.
Janet
From: Sky
Mundell
Sent: Thursday, May 31, 2018 10:40 PM
To: nvda@nvda.groups.io
Subject: [nvda] The future of NVDA
Hello NVDA community. It’s Sky. I wanted to ask you guys a question. Will NVDA be incorporating voice commands into the screen reader? Because a friend of mine has told me that in three years everything is going to be voice activated. Yes, we have DictationBridge for voice activation, but what my friend means is that in three years, computers, etc. will all be driven by voice activation, without a keyboard. Here is what he has to say.
From: bj colt
[mailto:bjcolt@...]
Sent: Thursday, May 31, 2018 8:12 AM To: Sky Mundell Subject: Re: CSUN
Hi Sky,
I just received an email from my local supermarket. I do an online shop there every week. From today I can order via Alexa, Google Home and other apps, using voice-only ordering.
I did say this is the way forward. With Amazon and Google competing, voice activation is going to be the next huge thing in computing. I've said this for a while, as you know. The next step is using actual programs/apps via voice activation. Just watch, my friend. VFO is finished, on the way out. They won't be able to compete in an open market, not as huge as this one. Just imagine, my friend. At the moment I have my favorites in a shopping list. Think about the keystrokes I need to use to get to them. Then additional items. I have to do a search of often up to 40 products with a similar name, arrowing down, tabbing down. Then adding them to my shopping basket. Going through the dates and times for delivery. Then all the keystrokes in the card details authorization process. All done with our voice, saving at least a quarter of the time normally spent shopping. This does spell the end of VFO.
Everything is going to be voice activated in the next 3 years. There isn't any other way for web developers to go.
Progress sometimes, my friend, is slow, but when it starts, it is like a high-speed jet aircraft. Nothing stands in its way.
There will be some people who won't change, or who will use both methods to carry out tasks. Now VFO have to make JAWS act on voice commands. With Dug in Microsoft, I can see VFO being left thousands of miles behind. Then, when they introduce pay-monthly fees, the very fast extinction of JAWS and other products will be very sudden and dramatic. They may think they have the market share for programs for the blind. They don't any more, and they are the ones who are blind, not us.
Live long and prosper, John
Re: The future of NVDA
The Gamages
Nothing of it really. I also like Star Trek, which is why I picked up on the closing in the message, e.g. “Live long and prosper”; perhaps not relevant to this discussion, just a little light banter, [smile].
Best Regards,
Jim.
I also like Star Trek; what of it?
On 6/1/2018 3:45 AM, The Gamages via Groups.Io wrote:
-- They Ask Me If I'm Happy; I say Yes. They ask: "How Happy are You?" I Say: "I'm as happy as a stow away chimpanzee on a banana boat!"
Re: The future of NVDA
The Gamages
Hello Gene,
You are so correct. Having been a sighted person, I agree: it is far quicker to read a document visually than to hear it read out; the eye can assimilate information far beyond the capabilities of the ear, and far quicker. You also explain vividly the nightmare of trying to edit with voice commands.
I spent years learning to touch type, yet I learn from sighted friends and relatives that they mainly use one or two fingers to type on a keyboard on a touch screen. Progress?
Like most things, we are stuck with voice output to read things; as blind people we don’t have much choice, so a mixture of technologies is the way to go. We use the things that suit our needs and leave others to do the same. I’ve said it before: long live NVDA.
Best Regards,
Jim.
Your friend is so biased that his opinions about Window-Eyes and JAWS are highly suspect. And he so much wants something to be so that he extrapolates without considering very important factors. Whatever happens to keyboards, some sort of ability for sighted people to do things on a screen by means other than speech will remain; touch screens, for example. Consider some examples:
Consider reviewing a rough draft. Which is faster? A sighted person is not going to listen to an entire document being read, looking for alterations to make in a draft, nor is he or she going to waste time telling the word processor to find a phrase, have it speak from the start of the phrase until saying "stop" to define the end of the phrase, then take some sort of action such as deleting it. If he wants to delete a phrase, what is the person going to do: move to a passage using speech, mark the start of the passage with speech, then mark the end of the passage with speech, then say "delete," then say "insert" and speak a new passage? The same with copying and pasting from one document to another. Such operations are far more efficient using a keyboard. I should add that I haven't used programs that operate a computer with speech. If I'm wrong, and people who use such programs know I am wrong, I await correction. That's how things appear to me.
What about file management? Consider using speech to tell a computer you want to delete fifteen noncontiguous files in a list of two hundred. Consider how you might do it with speech as opposed to using a keyboard.
And considerations of speed and efficiency hold when using the keyboard and a screen-reader as well. I've mainly discussed sighted users because innovations are developed for sighted users. Speech will become increasingly popular and powerful. It won't replace visual access and manipulation in computers.
I don't use spreadsheets, but I expect those who do may point out how cumbersome it would be to use speech with a spreadsheet to perform any somewhat complex series of operations with a screen-reader, and some may want to comment on the visual comparison.
As for JAWS versus Window-eyes, I won't say much, but it's not the fault of JAWS if the person was misled by his college advisor into learning a screen-reader that has always been a distant second in terms of its use in business and institutions. He should take his anger at FS, if he must spend so much time and energy being angry, and direct it where it belongs. I could write paragraphs about why JAWS was dominant: some of it because it got started first in the DOS screen-reader arena, some because it built up all sorts of relationships with institutions, and some because it was better for more employment situations than Window-eyes. How many years did Window-eyes refuse to use scripts and limit the functionality of the screen-reader in a stubborn attempt to distinguish itself from JAWS? Finally, what did they do? They used scripts, which they didn't call scripts, but apps. They weren't apps, and language should be respected. Words have meanings, and you can't, as one of the characters does in Through the Looking Glass, use any word to mean anything desired.
But enough. I'll leave the discussion to others from this point unless I have something additional to add.
Gene
----- Original Message -----
Voice commands, fine, but how does your friend check what he has ordered? Just a leap of faith, or a sort of screen reader which tells him? Think about it.
By his closing, your friend is a Trekkie [Star Trek fan].
Best Regards,
Jim.
Re: The future of NVDA
Here's an opinion from someone who can see and has been in the IT world for decades now: Voice control of everything is not now and never will be "the next big thing" nor the primary interface to computers for the vast majority of users and the vast majority of tasks.
Others have already pointed out, and very well, why this is true. One of my previous clients required the use of Dragon NaturallySpeaking to do everything with his computer. It has come leaps and bounds from where it started, including speech recognition, with very minimal training of the software, for an individual with a markedly atypical speech pattern. Even so, controlling a computer and all its functions by voice, even for a very skilled user, is far, far slower than accessing it via a keyboard or other "finger driven" interface. This is particularly so with regard to text-intensive applications, where one is endlessly pausing, thinking, jumping back a word or two and correcting or inserting something, etc. Most of what I do on a computer every day is not nearly as amenable to efficient and easy control by voice as it is via the conventional keyboard and mouse.
Reports of the death of technologies that are the direct descendants of things that have been in use for over 100 years are greatly exaggerated. The manual typewriter, electric typewriter, dedicated word processor, and word processor software - as well as things like e-mail - haven't hung on to the keyboard because voice recognition isn't mature enough to handle them. Anyone who uses a smartphone and texts via voice knows just how eerily accurate voice recognition is "out of the box" for myriad voices and accents these days. They also know that as soon as they get beyond a sentence or two, dictation usually begins to feel really, really messy compared to the "think and type, think and type" cycle for anyone who has been trained to type efficiently (or even not so efficiently).
-- Brian - Windows 10 Home, 64-Bit, Version 1803, Build 17134
Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong. ~ H.L. Mencken, AKA The Sage of Baltimore
Re: The future of NVDA
Engin Albayrak <mealbayrak@...>
Hello. Blind PC users mostly use the keyboard, so I think voice commands are not an important need for NVDA.
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Sky Mundell
Sent: Friday, June 1, 2018 10:42 AM To: nvda@nvda.groups.io Subject: Re: [nvda] The future of NVDA
OK. Let me explain why he is so gleeful. BJ, which is his name, is from Scotland. He was taught at Motherwell College, which at that time had a VI program that taught a screen reading program called Window-Eyes, and not JAWS. The guy who taught Window-Eyes at that college was named Robert Donald. He was a strong advocate of it, and he didn’t like how JAWS was always favoured by the organisation in the UK called the RNIB. My friend BJ was offered a job at Glasgow University: Robert Donald, the person behind the Window-Eyes classes at Motherwell College, was in contact with Glasgow University, and the person at the disability office there asked him if he knew anybody who could teach Window-Eyes. He put BJ’s name forward, and the person who offered BJ the job told him that she had to speak to the head of the office, and promised she would get back to him. Well, a few months passed and BJ never heard from this woman. So he phoned the lady who had offered him the job, and she told BJ that they had contacted the RNIB, and the RNIB had advised them to get JAWS. She did apologise to BJ and said that if they ever changed to Window-Eyes, she would contact him before anyone else. BJ wasn’t happy: he had lost a job he was hoping for and was almost certain to have gotten.
Well, a few years later, my friend was on the Talks mailing list - you might remember Talks as the screen reader for the Nokia phones. On the Talks list, there was a discussion going on about the price of items made specifically for the blind. One of the members commented that the education system in the UK and worldwide favoured JAWS above every other screen reader. The vendor denied the member’s allegations. Then BJ wrote on the list to the vendor and told him straight that vendors like him favoured JAWS and always would, and that he had been in the education system for eight years and knew of the situation with JAWS and the RNIB. The guy told him there had never been any favouritism of JAWS over any other screen reader in education and employment. It became a heated argument between members and the vendor over the favouritism of JAWS in education; some members stood up for BJ, while others stood up for the vendor. It got so out of hand that BJ left the Talks list.
Then, about 7 years ago, BJ’s girlfriend, Carol, who lived in Ontario, went to the school for the blind in Ontario to get assessed for a new computer. She, like BJ, was a Window-Eyes user; they had met on the former GW Micro mailing list, called GW-info. When she spoke to the elderly guy about what she wanted, and that she wanted Window-Eyes, the guy was very insistent from the start of the meeting that Carol take JAWS, and Carol was adamant that she was under no circumstances taking JAWS. She told the guy that she wanted Window-Eyes and nothing else. The guy also claimed that Window-Eyes was going bust. This was in 2011, and Window-Eyes was not going anywhere at that time. She also told the guy that she had a boyfriend in Scotland who had taught her everything she knew about Window-Eyes, and that he was talking a lot of pure crap. When she got the new computer, it was an HP, and it was pure junk, and it was somehow registered to somebody else. BJ had to spend a few days removing the registered name and putting her name in as the admin. He also had to help her install the programs, as the engineer had made a complete mess of them. So there is the story about this guy and why he has bad feelings towards FS and JAWS, and even the RNIB and other blind organisations. Thanks, Sky.
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Ron Canazzi
I am not qualified enough to comment on this subject, but I'll say one thing about the author: he seems to be filled with glee over the prospect of the demise of VFO, FKA Freedom Scientific.
On 6/1/2018 12:40 AM, Sky Mundell wrote:
-- They Ask Me If I'm Happy; I say Yes. They ask: "How Happy are You?" I Say: "I'm as happy as a stow away chimpanzee on a banana boat!"
Re: Why are NVDA next versions playing two musics when installing?
Ângelo Abrantes
The problem does not appear while the files are being extracted, at which point I hear several beeps. The situation appears immediately after you have answered the NVDA question. In the next update, I'll try disabling Avast, as I'm tempted to think it might be an antivirus issue. Greetings, Ângelo Abrantes. At 13:40 on 01-06-2018, Chris via Groups.Io wrote:
Re: Why are NVDA next versions playing two musics when installing?
Chris
Is that not normal then? Whilst it extracts the files?
From: Ângelo Abrantes
Sent: 01 June 2018 12:21 To: nvda@nvda.groups.io Subject: Re: [nvda] Why are NVDA next versions playing two musics when installing?
When I run the installer after an update, it plays the normal music, and then, before starting the installation, it plays the same music again. Ângelo Abrantes
Re: The future of NVDA
erik burggraaf <erik@...>
Speech interfaces for computers have been commercially viable for at least 30 years. However, they're not commercially successful. Even after 30 years, 50 to 100 hours of training is required to get fully accurate voice dictation. The cost of commercial products is still exorbitantly high, because the products are built for medical markets where cost is less a factor. Computers themselves, especially desktop computers, or so complex that the number of voice commands required to fully use a computer is astronomically High. Moreover, most people are not comfortable talking to a computer. Most people in fact are not even comfortable leaving a message on somebody's voicemail. Just go check your messages. You will hear a lot of nervous stuttering. I recently conducted a training on Jaws for windows with dragon and JC. The amount of overhead required to browse the internet was so high, that the excellent business laptop bought for the purpose could not keep up. Those 3 products working in conjunction only support Windows 7, Internet Explorer, and Office 2013. They say it will work with office 2016, but don't recommend it. So, a user that requires that interface is left with a legacy operating system n secure browsing and other system factors. Not liking vfo is just good sense. Not supporting vfo with your cash dollars is excellent policy. I'm sorry your friend is carrying a personal Grudge. It sounds like he has at least some good reason. Dispersion of light is not a great argument for the future success technology the period the fact of the matter is, voice dictation is simply not up to the level of speed, accuracy, and start-up efficiency you can get from a keyboard and mouse. Even a touch screen is far more efficient. Unless you have no access to these things because of motor or physical impairment, there's really no justification for it. Morning To close off, let me say that I dictated this entire message, with a few stops to collect my thoughts. 
For demonstration purposes, I left all of the mistakes in place, so that you could see what it really looks like. I'm sure you've seen this before. It just goes to show that the keyboard is going to be around for quite a long time. Have fun, Erik
On June 1, 2018 12:41:14 AM "Sky Mundell" <skyt@...> wrote:
Re: The future of NVDA
Lino Morales <linomorales001@...>
Well, ask NV Access during NVDACon, which starts today at 3 PM EDT. The schedule can be found at: www.nvdacon.org/
Sent from Mail for Windows 10
From: nvda@nvda.groups.io <nvda@nvda.groups.io> on behalf of Sky Mundell <skyt@...>
Sent: Friday, June 1, 2018 12:40:43 AM To: nvda@nvda.groups.io Subject: [nvda] The future of NVDA
Re: The future of NVDA
Ron Canazzi
I also like Star Trek; what of it?
On 6/1/2018 3:45 AM, The Gamages via Groups.Io wrote:
-- They Ask Me If I'm Happy; I say Yes. They ask: "How Happy are You?" I Say: "I'm as happy as a stow away chimpanzee on a banana boat!"
Re: Why are NVDA next versions playing two musics when installing?
Ângelo Abrantes
When I run the installer after an update, it plays the normal music, and then, before starting the installation, it plays the same music again. Greetings, Ângelo Abrantes
At 12:13 on 01-06-2018, Quentin Christensen wrote: