The future of NVDA


The Gamages
 

Hello Gene,
 
Without actually highlighting it, you have made a very important point: we are all human beings with different levels of expertise. I have nothing but admiration for everyone on this and other lists of this nature; they have all taken what life has handed them and are getting on with things, learning all the time.
Thanks everyone for a great discussion list.
 
 
 
Best Regards, Jim.
 

From: Gene
Sent: Friday, June 01, 2018 4:48 PM
Subject: Re: [nvda] The future of NVDA
 
Thank you.  I should clarify a point I made.  It is faster to skim a document by sight, but straight reading or listening may be as fast for a blind person.  I haven't asked sighted people about this, but I generally listen at about 350 words per minute, and I can listen without loss of comprehension, though it's more taxing, at about 400 words per minute.  Others can listen at faster speeds, I gather, without loss of comprehension.  I don't know how taxing faster listening is for those who do so without comprehension loss.  I don't know the average sighted person's speed when reading a computer screen; the statistic I've heard is that the average reading speed for a sighted person is about 300 words per minute.
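For concreteness, the listening speeds mentioned above translate into reading time roughly as follows. This is a minimal sketch; the 6,000-word document length is a made-up example, and the rates are the ones quoted in the message.

```python
def listening_time_minutes(word_count: int, wpm: int) -> float:
    """Time, in minutes, to hear word_count words at a given words-per-minute rate."""
    return word_count / wpm

# Hypothetical 6,000-word document at the speeds discussed above.
doc_words = 6000
for wpm in (300, 350, 400):
    print(f"{wpm} wpm: {listening_time_minutes(doc_words, wpm):.1f} minutes")
```

At these rates, the gap between 300 and 400 words per minute on a 6,000-word document is about five minutes.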
 
But the inefficiency lies in trying to skim, and to edit, using speech or Braille compared to sight.  I don't know how many blind people realize this, but a sighted person can review a document, find something that needs correction, such as a word to be changed or a phrase to be altered, click the mouse wherever he wants to make the change, and thus immediately move the cursor to that place.  That is much faster than listening to a document or skimming a document by speech and moving to the place: moving by line, if necessary, then by word, then by character if the edit is not at the immediate beginning of a line or at the very end.
 
I believe that Braille displays, in general, have a feature that allows you to move the cursor to where you are reading, and that would be much more efficient than speech, so I won't compare Braille movement in editing to sighted editing, since I don't know enough about it.

I wanted to clarify that straight reading can be done very efficiently by speech or by Braille if the person is good at fast listening or reading.  My other comments don't need to be changed.
 
And keep in mind that I'm discussing working from the keyboard and using a screen-reader for speech in the comments I modified.  My comments about using voice commands to do such things are unchanged.
 
Gene
----- Original Message -----
Sent: Friday, June 01, 2018 10:16 AM
Subject: Re: [nvda] The future of NVDA
 
Hello Gene,
 
You are so correct. Having been a sighted person, I agree: it is far quicker to read a document visually than to hear it read out; the eye can assimilate information far beyond the capabilities of the ear, and far quicker.
You also explain vividly the nightmare of trying to edit with voice commands.
I spent years learning to touch type, yet I learn from sighted friends and relatives that they mainly use one or two fingers to type on a touch-screen keyboard. Progress?
 
Like most things, we are stuck with voice output to read things; as blind people we don't have much choice, so a mixture of technologies is the way to go. We use the things that suit our needs and leave others to do the same. I've said it before: long live NVDA.
 
Best Regards, Jim.
 
From: Gene
Sent: Friday, June 01, 2018 11:43 AM
Subject: Re: [nvda] The future of NVDA
 
Your friend is so biased that his opinions about Window-eyes and JAWS are highly suspect.  And he so much wants something to be so that he extrapolates without considering very important factors.  Whatever happens to keyboards, some ability for sighted people to do things on a screen by means other than speech will remain; touch screens, for example.  Consider some examples:
 
Consider reviewing a rough draft.  Which is faster?  A sighted person is not going to listen to an entire document being read, looking for alterations to make in a draft, nor is he/she going to waste time telling the word processor to find the phrase, then speaking from the start of the phrase until he says "stop" to define the end of the phrase, then taking some sort of action such as deleting it.  If he wants to delete a phrase, what is the person going to do: move to a passage using speech, mark the start of the passage with speech, then mark the end of the passage with speech, then say delete, then say insert and speak a new passage?  The same goes for copying and pasting from one document to another.
 
And such operations are also far more efficient using a keyboard.  I should add that I haven't used programs that operate a computer with speech.  If I'm wrong, and people who use such programs know I am wrong, I await correction.  That's how things appear to me.
 
What about file management?  Consider using speech to tell a computer you want to delete fifteen noncontiguous files in a list of two hundred.  Consider how you might do it with speech as opposed to using a keyboard. 
 
And these considerations of speed and efficiency apply when using the keyboard and a screen-reader as well.  I've mainly discussed sighted users because innovations are developed for sighted users.
 
Speech will become increasingly popular and powerful.  It won't replace visual access and manipulation in computers. 
 
I don't use spreadsheets, but I expect those who do may point out how cumbersome it would be to use speech with a spreadsheet to perform any somewhat complex series of operations with a screen-reader, and some may want to comment on the visual comparison.
 
As for JAWS versus Window-eyes, I won't say much, but it's not the fault of JAWS if the person was misled by his college advisor into learning a screen-reader that has always been a distant second in terms of its use in business and institutions.  He should take his anger at FS, if he must spend so much time and energy being angry, and direct it where it belongs.  I could write paragraphs about why JAWS was dominant: some of it because it got started first in the DOS screen-reader arena, some because it built up all sorts of relationships with institutions, and some because it was better for more employment situations than Window-eyes.  How many years did Window-eyes refuse to use scripts and limit the functionality of the screen-reader in a stubborn attempt to distinguish itself from JAWS?  Finally, what did they do?  They used scripts, which they didn't call scripts, but apps.  They weren't apps, and language should be respected.  Words have meanings, and you can't, as one of the characters does in Through the Looking Glass, use any word to mean anything desired.
 
But enough.  I'll leave the discussion to others from this point unless I have something additional to add.
 
Gene
----- Original Message -----
Sent: Friday, June 01, 2018 2:45 AM
Subject: Re: [nvda] The future of NVDA
 
Voice commands, fine, but how does your friend check what he has ordered? Just a leap of faith, or a sort of screen reader which tells him? Think about it.
 
By his closing, your friend is a Trekkie [Star Trek fan].
 
 
Best Regards, Jim.
 
Sent: Friday, June 01, 2018 5:40 AM
Subject: [nvda] The future of NVDA
 

Hello NVDA community. It's Sky. I wanted to ask you guys a question. Will NVDA be incorporating voice commands into the screen reader? Because a friend of mine has told me that in three years everything is going to be voice activated. Yes, we have DictationBridge for voice activation, but what my friend means is that in three years computers, etc. will all be operated via voice activation, without a keyboard. Here is what he has to say.

From: bj colt [mailto:bjcolt@...]
Sent: Thursday, May 31, 2018 8:12 AM
To: Sky Mundell
Subject: Re: CSUN

 

Hi Sky,

 

I just received an email from my local supermarket. I do an online shop there every week. From today I can order via Alexa, Google Home and other apps using voice-only ordering.

 

I did say this is the way forward. With Amazon and Google competing, this voice activation is going to be the next huge thing in computing. I've said this for a while, as you know. The next step is using actual programs/apps via voice activation. Just watch, my friend: VFO is finished, on the way out. They won't be able to compete in an open market, not one as huge as this. Just imagine, my friend. At the moment I have my favorites in a shopping list. Think about the keystrokes I need to use to get to them. Then additional items: I have to do a search of often up to 40 products with a similar name, arrowing down, tabbing down, then adding them to my shopping basket, going through the dates and times for delivery, then all the keystrokes in the card-details authorization process. All done with our voice, in at least a quarter of the time normally spent shopping. This does spell the end of VFO.

 

Everything is going to be voice activated in the next 3 years. There isn't any other way for web developers to go.

 

Progress, my friend, is sometimes slow, but when it starts it is like a high-speed jet aircraft. Nothing stands in its way.

 

There will be some people who won't change, or who will use both methods to carry out tasks. Now VFO have to make JAWS act on voice commands. With Dug in Microsoft, I can see VFO being left thousands of miles behind. Then, when they introduce pay-monthly fees, JAWS and other products will come to a very sudden and dramatic halt: a very fast extinction. They may think they have the market share for programs relating to the blind. They don't any more, and they are the ones who are blind, not us.

 

Live long and prosper, John

 


Pranav Lal
 

Brian,
<snip> I have a story about one user who got so annoyed with the dictation he
swore at it, it transcribed all the swear words perfectly of course!
PL] I have heard it before.

Pranav


Sarah k Alawami
 

I never went back and read the message via braille, so I can personally say that the message sounded fine and I thought you typed it all. So yeah, I think dictation is the wave of the future, or maybe the way of the future? I did dictate maybe 10 percent of my college thesis, as I was lazy and did not want to type it; besides, I articulate as I type anyway, so no issues there.

On Jun 1, 2018, at 4:51 AM, erik burggraaf <erik@...> wrote:

Speech interfaces for computers have been commercially viable for at least 30 years. However, they're not commercially successful. Even after 30 years, 50 to 100 hours of training is required to get fully accurate voice dictation. The cost of commercial products is still exorbitantly high, because the products are built for medical markets where cost is less a factor. Computers themselves, especially desktop computers, or so complex that the number of voice commands required to fully use a computer is astronomically High. Moreover, most people are not comfortable talking to a computer. Most people in fact are not even comfortable leaving a message on somebody's voicemail. Just go check your messages. You will hear a lot of nervous stuttering. I recently conducted a training on Jaws for windows with dragon and JC. The amount of overhead required to browse the internet was so high, that the excellent business laptop bought for the purpose could not keep up. Those 3 products working in conjunction only support Windows 7, Internet Explorer, and Office 2013. They say it will work with office 2016, but don't recommend it. So, a user that requires that interface is left with a legacy operating system n secure browsing and other system factors.

Not liking vfo is just good sense. Not supporting vfo with your cash dollars is excellent policy. I'm sorry your friend is carrying a personal Grudge. It sounds like he has at least some good reason. Dispersion of light is not a great argument for the future success technology the period the fact of the matter is, voice dictation is simply not up to the level of speed, accuracy, and start-up efficiency you can get from a keyboard and mouse. Even a touch screen is far more efficient. Unless you have no access to these things because of motor or physical impairment, there's really no justification for it. Morning

To close off, let me say that I dictated this entire message, with a few stops to collect my thoughts. For demonstration purposes, I left all of the mistakes in place, so that you could see what it really looks like. I'm sure you've seen this before. It just goes to show that the keyboard is going to be around for quite a long time. Have fun, Erik

On June 1, 2018 12:41:14 AM "Sky Mundell" <skyt@...> wrote:

 


 

Prediction and autocorrect can be a blessing, but most of the time they are not; we are not there yet.

Sadly, people, and governments especially, tend to trust the machine, when even if the system and its software are good, its data may not be.

I wouldn't trust the dataset my speech synth has, though I would mostly trust the PC, the OS and the software.

On 6/2/2018 6:55 AM, Brian's Mail list account via Groups.Io wrote:
I have a story about one user who got so annoyed with the dictation he swore at it; it transcribed all the swear words perfectly, of course!

Brian

bglists@blueyonder.co.uk
Sent via blueyonder.
Please address personal E-mail to:-
briang1@blueyonder.co.uk, putting 'Brian Gaff'
in the display name field.
----- Original Message ----- From: "Pranav Lal" <pranav.lal@gmail.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 5:59 PM
Subject: Re: [nvda] The future of NVDA


Sky,



NVDA already has the ability to use voice commands. See the dictationBridge
add-on from http://dictationbridge.com. Yes, it does require Dragon
Individual Professional or WSR for it to work.



I have used speech-recognition and continue to use it. Speech-recognition is
good for rapid dictation. If I am writing a story, I will dictate it.
However, nothing beats the keyboard when editing. The problem here is the
linear nature of speech output. Sighted users can count how many words to
move forward and say "move forward 10 words."  The blind user can also say
the same thing but how do you count words using speech? Speech-recognition
can be incredibly efficient if you have a large number of commands to handle
your tasks. I cannot say if speech will ever replace the keyboard. I doubt
it will, for the reasons other users have outlined. The way forward is to
think of input methods as complementary.
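The counting problem Pranav describes can be sketched with a toy function (hypothetical names; this is not DictationBridge or NVDA code): the editor can resolve "move forward 10 words" instantly from the text buffer, while a listener would have to count the words by ear.

```python
import re

def move_forward_words(text: str, pos: int, n: int) -> int:
    """Return the index of the start of the n-th word after pos.

    A toy model of a 'move forward N words' voice command: the editor
    computes this instantly, but a listener has no way to know N
    without counting words by ear.
    """
    # Collect the start offsets of all words that begin after pos.
    starts = [m.start() for m in re.finditer(r"\S+", text) if m.start() > pos]
    if n <= len(starts):
        return starts[n - 1]
    return len(text)  # past the last word: clamp to the end of the text

text = "the quick brown fox jumps over the lazy dog"
pos = move_forward_words(text, 0, 3)  # from the start, forward 3 words
```

Here `move_forward_words(text, 0, 3)` lands on "fox"; the point is that the machine, not the speaker, does the counting.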



Long live NVDA and the keyboard and speech-recognition!

Pranav




 

Well, a lot of people have reason to be angry with VFO and the groups before it.

I pulled out of JAWS when I made the jump to Win7; I was happy to upgrade my Dolphin stuff rather than that.

Business-wise there isn't much chance of me upgrading VFO products. Of course, if I actually start working then I will probably have to use JAWS, especially if it's a custom software package.

It goes without saying that work is all I will use it for, though.

On 6/2/2018 6:49 AM, Rosemarie Chavarria wrote:
Hi, Sky,


I doubt that voice commands will come to NVDA. I think the keyboard will
still be around for a long time. I don't know if your friend is aware of
this but you have to pay to use something like google shop or whatever it's
called. I do have google home but I don't use it for shopping because I
don't have the money to pay for that service. I'm so sorry that your friend
is so angry with VFO.


Rosemarie



 

To be honest, since we need to have NVDA installed on a system for proper use of secured screens and login prompts, I do wonder if we should revisit the graphics-interceptor driver idea.

The difference, unlike other commercial readers, is that an interceptor shouldn't be the core of the reader, and shouldn't need to be installed; but if the reader is installed, it could be something the reader falls back on if, for example, its controller client is not present and there isn't anything else.

Of course, if the program is too visual then it is.

On the other side, graphics intercepts don't always see things right either.

And I am opposed to intercepts that have to start with your computer, mangling display drivers and the like, especially with entertainment cards that get frequent updates.

If there were a way to install an interception-driver add-on that would do this without being tied to the system, that would be nice; but still, it's an idea.

In some cases interceptors work well, as do virtual cursors.

Modern web controls, which a lot of OSes are now based on, are NVDA's strong point.

On 6/2/2018 6:48 AM, Brian's Mail list account via Groups.Io wrote:
Yes, I've been helping an ECLO recently who has JAWS provided by Access to Work, but she is always cursing it when it acts downright stupid for no apparent reason. The problem is that only JAWS has scripts written for the software used inside the NHS. This is, at the moment, a limitation for NVDA.
Brian

bglists@blueyonder.co.uk
Sent via blueyonder.
Please address personal E-mail to:-
briang1@blueyonder.co.uk, putting 'Brian Gaff'
in the display name field.
----- Original Message ----- From: "bob jutzi" <jutzi1@gmail.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 6:41 PM
Subject: Re: [nvda] The future of NVDA


And you end up with a monstrosity like Jaws.
I have JAWS as the result of the Window-Eyes migration, and it does have some rather cool features, such as the Research It tool and Convenient OCR, which now supports flatbed scanners. However, due to my no longer being employed, among other factors, along with the fact that I've used NVDA for over two years, I will probably stick with NVDA as my primary screen reader. In the time it takes for a full JAWS installation, I can have NVDA up and running. Plus, it supports the software I use very well.

On 6/1/2018 11:58 AM, Jackie wrote:
Here's my $.02. NVDA is designed to do 1 thing & does it reasonably
well, which is to be a *screen reader.* As such, it should certainly
work w/other software that does speech recognition, but speech
recognition should not in any way be a function of NVDA. You start
trying to incorporate too many functions into a program, & it ends up
doing none of them well.

I was hearing back in 98 how voice recognition was going to be the
be-all & end-all for computers. Transcriptionists were going to lose
their jobs in droves, all computers would type letter-perfect when you
spoke, etc. Could you imagine an office full of cubicles where
everyone was talking to their machines? It'd be a frickin zoo! We are
certainly reaching a point where dictation to one's device is becoming
an increasing reality, but as a sight-impaired computer user, you've
still got to have something that lets you know what's onscreen. Unless
MS decides to bring Narrator beyond the level of Voiceover, or some
sort of artificial eyesight becomes a reality, (& I'd love nothing
more than to see either of those happen), you'll always need something
like NVDA.




 

Firstly, if that day ever came, well and good.

But being open source, I'd like to see NVDA continue; there shouldn't just be one size fits all.

Next, the rest.

Dolphin stuff, that may go, but JAWS?

Let's see; actually, let's not.

I think VFO would sue Microsoft or whatever before it died.

Governments have spent so much on it and will continue to; JAWS is for people with jobs, so death is unlikely.

Even if it stopped right now, it's not like they'd go anywhere that fast.

Saying that, Dolphin stuff has only just got Chrome support, while everyone else has had Firefox support for ages.

On 6/2/2018 6:33 AM, Brian's Mail list account via Groups.Io wrote:
However, on a different tack: on all platforms other than Windows, the operating system's default screen reader is the only one supported. Will there come a day when Narrator is just so good that a third-party screen reader will be pointless?
What about some kind of collaboration with the Narrator team to make a Windows system that just works? I am certainly not so vain as to imagine that a system like NVDA, if it were my creation, should be the only game in town. The issue is: will JAWS and Dolphin survive when NVDA and Narrator are free to use, and if Narrator can be helped to get everything right, is there any need for NVDA?

Devil's advocate, but a legitimate question, I feel.
I.e., if Narrator had scripting and could then support any software that Windows ran, the programmers out here could concentrate on scripting the awkward programs.
Brian

bglists@blueyonder.co.uk
Sent via blueyonder.
Please address personal E-mail to:-
briang1@blueyonder.co.uk, putting 'Brian Gaff'
in the display name field.
----- Original Message ----- From: "The Gamages via Groups.Io" <james.gamage=btinternet.com@groups.io>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 4:16 PM
Subject: Re: [nvda] The future of NVDA


Hello Gene,

You are so correct. Having been a sighted person, I agree it is far quicker
to read a document visually than to hear it read out; the eye can assimilate
information far beyond the capabilities of the ear, and far quicker.
You also explain vividly the nightmare of trying to edit with voice
commands.
I spent years learning to touch type; I learn from sighted friends and
relatives that they mainly use one or two fingers to type on a touch-screen
keyboard. Progress?

Like most things, we are stuck with voice output to read things; as blind
people we don't have much choice, so a mixture of technologies is the way to
go. We use the things that suit our needs and leave others to do the same.
I've said it before: long live NVDA.

Best Regards, Jim.

From: Gene
Sent: Friday, June 01, 2018 11:43 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

Your friend is so biased that his opinions about Window-eyes and JAWS are
highly suspect.  And he so much wants something to be so that he
extrapolates without considering very important factors.  Whatever happens
to keyboards, some sort of ability for sighted people to do things on a
screen in other means than speech will remain, touch screens, for example.
Consider some examples:

Consider reviewing a rough draft.  Which is faster?  A sighted person is not
going to listen to an entire document being read, looking for alterations to
make in a draft nor is he/she going to waste time telling the word processor
to find the phrase, then continue speaking from the start of the phrase until
he says stop to define the end of the phrase, then take some sort of action
such as delete it.  If he wants to delete a phrase, what is the person going
to do, move to a passage using speech, mark the start of the passage with
speech, then mark the end of the passage with speech then say delete, then
say insert and speak a new passage?  The same with copying and pasting from
one document to another.

And such operations are also far more efficient using a keyboard. I should
add that I haven't used programs that operate a computer with speech.  If
I'm wrong, and people who use such programs know I am wrong, I await
correction.  That's how things appear to me.

What about file management?  Consider using speech to tell a computer you
want to delete fifteen noncontiguous files in a list of two hundred.
Consider how you might do it with speech as opposed to using a keyboard.

And considerations of speed and efficiency are true when using the keyboard
and a screen-reader as well.  I've mainly discussed sighted users because
innovations are developed for sighted users.

Speech will become increasingly popular and powerful.  It won't replace
visual access and manipulation in computers.

I don't use spread sheets but I expect those who do may point out how
cumbersome it would be to use speech with a spread sheet to perform any
somewhat complex series of operations with a screen-reader and some may want
to comment on the visual comparison.

As for JAWS versus Window-eyes, I won't say much but it's not the fault of
JAWS if the person was misled by his college advisor to learn a
screen-reader that has always been a far second in terms of its use in
business and institutions.  He should take his anger at FS, if he must spend
so much time and energy being angry, and direct it where it belongs.  I
could write paragraphs about why JAWS was dominant, some of it because it
got started first in the DOS screen-reader arena, some of it because it
built up all sorts of relationships with institutions, and some because it
was better for more employment situations than Window-eyes.  How many years
did Window-eyes refuse to use scripts and limit the functionality of the
screen-reader in a stubborn attempt to distinguish itself from JAWS?
Finally, what did they do?  They used scripts, which they didn't call
scripts, but apps.  They weren't apps, and language should be respected.
Words have meanings and you can't, as one of the characters does in Through
the Looking Glass, use any word to mean anything desired.

But enough.  I'll leave the discussion to others from this point unless I
have something additional to add.

Gene
----- Original Message -----
From: The Gamages via Groups.Io
Sent: Friday, June 01, 2018 2:45 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

Voice commands, fine, but how does your friend check what he has ordered?
Just a leap of faith, or a sort of screen reader which tells him? Think
about it.

By his closing, your friend is a Trekkie [Star Trek fan].


Best Regards, Jim.

From: Sky Mundell
Sent: Friday, June 01, 2018 5:40 AM
To: nvda@nvda.groups.io
Subject: [nvda] The future of NVDA

Hello NVDA community. It's Sky. I wanted to ask you guys a question. Will
NVDA be incorporating voice commands into the screen reader? Because a
friend of mine has told me that in three years everything is going to be
voice activated. Yes, we have DictationBridge for voice activation, but what
my friend means is that in three years the computers, etc. will all be done
via voice activation without a keyboard. Here is what he has to say.

From: bj colt [mailto:bjcolt@blueyonder.co.uk]
Sent: Thursday, May 31, 2018 8:12 AM
To: Sky Mundell
Subject: Re: CSUN



Hi Sky,



I just received an email from my local supermarket. I do an online shop
there every week. From today I can order it via Alexa, Google Home and other
apps using voice-only ordering.



I did say this is the way forward. With Amazon and Google competing, this
voice activation is going to be the next huge thing in computing. I've said
this for a while, as you know. The next step is using actual programs/apps
via voice activation. Just watch, my friend. VFO is finished, on the way out.
They won't be able to compete in an open market, not as huge as this one.
Just imagine, my friend. At the moment I have my favorites in a shopping
list. Think about the keystrokes I need to use to get to them. Then
additional items. I have to do a search of often up to 40 products with a
similar name, arrowing down, tabbing down. Then adding them to my shopping
basket. Going through the dates for delivery and times. Then all the
keystrokes in using my card details authorization process. All done with our
voice, in at least a quarter of the time normally spent shopping. This does
spell the end of VFO.



Everything is going to be voice activated in the next 3 years. There isn't
any other way for web developers to go.



Progress, my friend, is sometimes slow, but when it starts it is like a
high-speed jet aircraft. Nothing stands in its way.



There will be some people who won't change, or who will use both methods to
carry out tasks. Now VFO have to utilize JAWS to act on voice commands. With
Doug at Microsoft, I can see VFO being left thousands of miles behind. Then,
when they introduce monthly fees, the very fast extinction of JAWS and other
products will come, very sudden and dramatic. They may think they have the
market share for programs relating to the blind. They don't any more, and
they are the ones who are blind, not us.



Live long and prosper, John








Brian's Mail list account <bglists@...>
 

I have a story about one user who got so annoyed with the dictation that he swore at it; it transcribed all the swear words perfectly, of course!

Brian


----- Original Message -----
From: "Pranav Lal" <pranav.lal@gmail.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 5:59 PM
Subject: Re: [nvda] The future of NVDA


Sky,



NVDA already has the ability to use voice commands. See the dictationBridge
add-on from http://dictationbridge.com. Yes, it does require Dragon
Individual Professional or WSR for it to work.



I have used speech-recognition and continue to use it. Speech-recognition is
good for rapid dictation. If I am writing a story, I will dictate it.
However, nothing beats the keyboard when editing. The problem here is the
linear nature of speech output. Sighted users can count how many words to
move forward and say "move forward 10 words." The blind user can also say
the same thing but how do you count words using speech? Speech-recognition
can be incredibly efficient if you have a large number of commands to handle
your tasks. I cannot say if speech will ever replace the keyboard. I doubt
it will for the reasons other users have outlined. The way forward is to
think of input methods as complementary.



Long live NVDA and the keyboard and speech-recognition!

Pranav


Brian's Mail list account <bglists@...>
 

Read my comment about JAWS and the NHS recently posted. I think in many places bespoke applications are a problem for blind people employed these days, which is probably why JAWS is still the one to go for. However, much of the problem from IT managers is that open source still means insecure to them, sadly.
Brian


----- Original Message -----
From: "erik burggraaf" <erik@erik-burggraaf.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 12:51 PM
Subject: Re: [nvda] The future of NVDA


Speech interfaces for computers have been commercially viable for at least
30 years. However, they're not commercially successful. Even after 30
years, 50 to 100 hours of training is required to get fully accurate voice
dictation. The cost of commercial products is still exorbitantly high,
because the products are built for medical markets where cost is less a
factor. Computers themselves, especially desktop computers, or so complex
that the number of voice commands required to fully use a computer is
astronomically High. Moreover, most people are not comfortable talking to a
computer. Most people in fact are not even comfortable leaving a message on
somebody's voicemail. Just go check your messages. You will hear a lot of
nervous stuttering. I recently conducted a training on Jaws for windows
with dragon and JC. The amount of overhead required to browse the internet
was so high, that the excellent business laptop bought for the purpose
could not keep up. Those 3 products working in conjunction only support
Windows 7, Internet Explorer, and Office 2013. They say it will work with
office 2016, but don't recommend it. So, a user that requires that
interface is left with a legacy operating system n secure browsing and
other system factors.

Not liking vfo is just good sense. Not supporting vfo with your cash
dollars is excellent policy. I'm sorry your friend is carrying a personal
Grudge. It sounds like he has at least some good reason. Dispersion of
light is not a great argument for the future success technology the period
the fact of the matter is, voice dictation is simply not up to the level of
speed, accuracy, and start-up efficiency you can get from a keyboard and
mouse. Even a touch screen is far more efficient. Unless you have no access
to these things because of motor or physical impairment, there's really no
justification for it. Morning

To close off, let me say that I dictated this entire message, with a few
stops to collect my thoughts. For demonstration purposes, I left all of the
mistakes in place, so that you could see what it really looks like. I'm
sure you've seen this before. It just goes to show that the keyboard is
going to be around for quite a long time. Have fun, Erik






Rosemarie Chavarria
 

Hi, Sky,

 

I doubt that voice commands will come to NVDA. I think the keyboard will still be around for a long time. I don't know if your friend is aware of this, but you have to pay to use something like Google shopping or whatever it's called. I do have Google Home but I don't use it for shopping because I don't have the money to pay for that service. I'm so sorry that your friend is so angry with VFO.

 

Rosemarie

 



Brian's Mail list account <bglists@...>
 

Chuckle..

I don't think it was anything other than a bit of fun.
Brian


----- Original Message -----
From: "Ron Canazzi" <aa2vm@roadrunner.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 12:40 PM
Subject: Re: [nvda] The future of NVDA


I also like Star Trek; what of it?



On 6/1/2018 3:45 AM, The Gamages via Groups.Io wrote:
voice commands, fine, but how does your friend check what he has
ordered? just a leap of faith, or a sort of screen reader which tells
him, think about it.
By his closing your friend is a Trekkie, [star trec fan]
Best Regards, Jim.

--
They Ask Me If I'm Happy; I say Yes.
They ask: "How Happy are You?"
I Say: "I'm as happy as a stow away chimpanzee on a banana boat!"


Brian's Mail list account <bglists@...>
 

Yes, I've been helping an ECLO recently who has JAWS provided by Access to Work, but she is always cursing it when it acts downright stupid for no apparent reason. The problem is that only JAWS has scripts written for the software used inside the NHS. This is, at the moment, a limitation for NVDA.
Brian


----- Original Message -----
From: "bob jutzi" <jutzi1@gmail.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 6:41 PM
Subject: Re: [nvda] The future of NVDA


And you end up with a monstrosity like JAWS.
I have JAWS as the result of the Window-Eyes migration, and it does have some rather cool features, such as the Research It tool and convenient OCR, which now supports flatbed scanners. However, due to my no longer being employed, among other factors, along with the fact that I've used NVDA for over two years, I will probably stick with NVDA as my primary screen reader. In the time it takes for a full JAWS installation, I can have NVDA up and running. Plus, it supports the software I use very well.

On 6/1/2018 11:58 AM, Jackie wrote:
Here's my $.02. NVDA is designed to do 1 thing & does it reasonably
well, which is to be a *screen reader.* As such, it should certainly
work w/other software that does speech recognition, but speech
recognition should not in any way be a function of NVDA. You start
trying to incorporate too many functions into a program, & it ends up
doing none of them well.

I was hearing back in '98 how voice recognition was going to be the
be-all & end-all for computers. Transcriptionists were going to lose
their jobs in droves, all computers would type letter-perfect when you
spoke, etc. Could you imagine an office full of cubicles where
everyone was talking to their machines? It'd be a frickin zoo! We are
certainly reaching a point where dictation to one's device is becoming
an increasing reality, but as a sight-impaired computer user, you've
still got to have something that lets you know what's onscreen. Unless
MS decides to bring Narrator beyond the level of Voiceover, or some
sort of artificial eyesight becomes a reality, (& I'd love nothing
more than to see either of those happen), you'll always need something
like NVDA.



Brian's Mail list account <bglists@...>
 

Note I have had long experience with Alexa, and have seen under the hood of how it recognises stuff and, in some cases, does not; it's not trivial. Basically the writer of a skill needs to second-guess the errors it will hear and try to trigger on the right errors but not the wrong ones. So although it's good and has some clever routines built into the cloud operating system, often it is the job of the skill writer to make it a natural conversational experience. That is why Siri is so bad: it lacks a lot of this configuration, since many apps are not able to access it properly.
However, I bring this up merely to point out that the tech is not really good enough yet.
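The second-guessing Brian describes, registering the likely mishearings so the right errors still trigger an intent while unrelated phrases do not, might be sketched like this (illustrative Python only; this is not the real Alexa skills API, and the intent names and sample phrases are invented):

```python
import difflib

# Hypothetical intent table: alongside each intent's canonical utterances,
# the skill writer lists misrecognitions the speech engine is known to
# produce, so those "right errors" still trigger the intent.
INTENT_UTTERANCES = {
    "AddToBasketIntent": [
        "add to basket",
        "add to my basket",
        "add to bass kit",   # common mishearing we still want to accept
        "at to basket",
    ],
    "DeliverySlotIntent": [
        "choose a delivery slot",
        "pick a delivery time",
    ],
}

def resolve_intent(heard: str, cutoff: float = 0.8):
    """Return the best-matching intent for a heard phrase, or None if
    nothing scores above the cutoff (the "wrong errors" fall through)."""
    best_intent, best_score = None, 0.0
    for intent, samples in INTENT_UTTERANCES.items():
        for sample in samples:
            score = difflib.SequenceMatcher(None, heard.lower(), sample).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= cutoff else None
```

A real skill would put the extra sample utterances in its interaction model rather than fuzzy-matching by hand, but the principle is the same: the skill author, not the recogniser, anticipates the errors.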
Brian

bglists@blueyonder.co.uk
Sent via blueyonder.
Please address personal E-mail to:-
briang1@blueyonder.co.uk, putting 'Brian Gaff'
in the display name field.

----- Original Message -----
From: "Jackie" <abletec@gmail.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 4:58 PM
Subject: Re: [nvda] The future of NVDA


Here's my $.02. NVDA is designed to do 1 thing & does it reasonably
well, which is to be a *screen reader.* As such, it should certainly
work w/other software that does speech recognition, but speech
recognition should not in any way be a function of NVDA. You start
trying to incorporate too many functions into a program, & it ends up
doing none of them well.

I was hearing back in '98 how voice recognition was going to be the
be-all & end-all for computers. Transcriptionists were going to lose
their jobs in droves, all computers would type letter-perfect when you
spoke, etc. Could you imagine an office full of cubicles where
everyone was talking to their machines? It'd be a frickin zoo! We are
certainly reaching a point where dictation to one's device is becoming
an increasing reality, but as a sight-impaired computer user, you've
still got to have something that lets you know what's onscreen. Unless
MS decides to bring Narrator beyond the level of VoiceOver, or some
sort of artificial eyesight becomes a reality, (& I'd love nothing
more than to see either of those happen), you'll always need something
like NVDA.

On 6/1/18, The Gamages via Groups.Io
<james.gamage=btinternet.com@groups.io> wrote:
Hello Gene,

You are so correct. Having been a sighted person, I agree: it is far quicker
to read a document visually than to hear it read out; the eye can assimilate
information far beyond the capabilities of the ear, and far quicker.
You also explain vividly the nightmare of trying to edit with voice
commands.
I spent years learning to touch type; I learn from sighted friends and
relatives that they mainly use one or two fingers to type on a keyboard on
a touch screen. Progress?

Like most things, we are stuck with voice output to read things; as blind people we
don’t have much choice, so a mixture of technologies is the way to go. We
use the things that suit our needs and leave others to do the same. I’ve
said it before: long live NVDA.

Best Regards, Jim.

From: Gene
Sent: Friday, June 01, 2018 11:43 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

Your friend is so biased that his opinions about Window-Eyes and JAWS are
highly suspect. And he so much wants something to be so that he
extrapolates without considering very important factors. Whatever happens
to keyboards, some ability for sighted people to do things on a
screen by means other than speech will remain: touch screens, for example.
Consider some examples:

Consider reviewing a rough draft. Which is faster? A sighted person is not
going to listen to an entire document being read, looking for alterations to
make in a draft; nor is he or she going to waste time telling the word processor
to find a phrase, then continue speaking from the start of the phrase until
saying stop to define the end of the phrase, then take some sort of action
such as deleting it. If he wants to delete a phrase, what is the person going
to do: move to a passage using speech, mark the start of the passage with
speech, then mark the end of the passage with speech, then say delete, then
say insert and speak a new passage? The same goes for copying and pasting from
one document to another.

And such operations are also far more efficient using a keyboard. I should
add that I haven't used programs that operate a computer with speech. If
I'm wrong, and people who use such programs know I am wrong, I await
correction. That's how things appear to me.

What about file management? Consider using speech to tell a computer you
want to delete fifteen noncontiguous files in a list of two hundred.
Consider how you might do it with speech as opposed to using a keyboard.

And considerations of speed and efficiency are true when using the keyboard
and a screen-reader as well. I've mainly discussed sighted users because
innovations are developed for sighted users.

Speech will become increasingly popular and powerful. It won't replace
visual access and manipulation in computers.

I don't use spreadsheets, but I expect those who do may point out how
cumbersome it would be to use speech with a spreadsheet to perform any
somewhat complex series of operations with a screen-reader, and some may want
to comment on the visual comparison.

As for JAWS versus Window-Eyes, I won't say much, but it's not the fault of
JAWS if the person was misled by his college advisor to learn a
screen-reader that has always been a far second in terms of its use in
business and institutions. He should take his anger at FS, if he must spend
so much time and energy being angry, and direct it where it belongs. I
could write paragraphs about why JAWS was dominant: some of it because it
got started first in the DOS screen-reader arena, some because it
built up all sorts of relationships with institutions, and some because it
was better for more employment situations than Window-Eyes. How many years
did Window-Eyes refuse to use scripts and limit the functionality of the
screen-reader in a stubborn attempt to distinguish itself from JAWS?
Finally, what did they do? They used scripts, which they didn't call
scripts, but apps. They weren't apps, and language should be respected.
Words have meanings, and you can't, as one of the characters does in Through
the Looking Glass, use any word to mean anything desired.

But enough. I'll leave the discussion to others from this point unless I
have something additional to add.

Gene
----- Original Message -----
From: The Gamages via Groups.Io
Sent: Friday, June 01, 2018 2:45 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

Voice commands, fine, but how does your friend check what he has ordered?
Just a leap of faith, or a sort of screen reader which tells him? Think
about it.

By his closing, your friend is a Trekkie [Star Trek fan].


Best Regards, Jim.

From: Sky Mundell
Sent: Friday, June 01, 2018 5:40 AM
To: nvda@nvda.groups.io
Subject: [nvda] The future of NVDA

Hello NVDA community. It’s Sky. I wanted to ask you guys a question. Will
NVDA be incorporating voice commands into the screen reader? Because a
friend of mine has told me that in three years everything is going to be
voice activated. Yes, we have Dictation Bridge for voice activation, but what
my friend means is that in three years computers, etc. will all be operated
via voice activation, without a keyboard. Here is what he has to say.

From: bj colt [mailto:bjcolt@blueyonder.co.uk]
Sent: Thursday, May 31, 2018 8:12 AM
To: Sky Mundell
Subject: Re: CSUN



Hi Sky,



I just received an email from my local supermarket. I do an online shop
there every week. From today I can order via Alexa, Google Home and other
apps, using voice-only ordering.



I did say this is the way forward. With Amazon and Google competing, voice
activation is going to be the next huge thing in computing. I've said
this for a while, as you know. The next step is using actual programs/apps
via voice activation. Just watch, my friend. VFO is finished, on the way out.


--
Remember! Friends Help Friends Be Cybersafe
Jackie McBride
Helping Cybercrime Victims 1 Person at a Time
https://brighter-vision.com


Brian's Mail list account <bglists@...>
 

Yes, does there exist a Just Read add-on that works on Waterfox? I am beginning to fall in love with this browser: it's fast, and better still it actually reads the page automatically every time, a function Firefox, Edge and IE never seemed to do reliably in NVDA.
Brian


----- Original Message -----
From: "Brian Vogel" <britechguy@gmail.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 5:21 PM
Subject: Re: [nvda] The future of NVDA


On Fri, Jun 1, 2018 at 08:55 am, Holger Fiallo wrote:


The eyes can go all over the place within seconds and look at something
fast. JAWS, NVDA or another program can only go up, down, left or right.
It took me a much longer time than it should have to "get" this concept that should be obvious. Joseph Lee turned on the lightbulb in my head by using the phrase that the sighted see things like webpages as a gestalt. We instantly and instinctively "edit out" all irrelevant information and glide our visual focus only to those things we know in advance (or decide in the moment) that we're looking for.

A screen reader, at least with both web coding (for web pages) and other AI "selection" technology being what it is today, has absolutely no way of knowing what the intent of the user is with regard to anything they're looking at. If you're looking at a contract, for instance, via screen reader it has no idea whether you care about "the fine print" or not and must present everything as a result. It can be, and often is, maddening even once you're used to it.

It's also interesting, at least when I'm working with someone who was formerly sighted and we have one of the voice synthesizers that really sounds human, how there is a hesitation to interrupt/cut off the screen reader voice output. I believe this is because we're socialized that it's rude to interrupt and when something sounds sufficiently human as to be indistinguishable (or very nearly so) from same there is a reluctance to cut it off. This is not so pronounced with the more "robotic" voices, but it is still there to some extent. One of the first things I try to drive home is that you can, and must, get used to cutting off the screen reader once you've heard enough to determine that something's not of interest or that you need to do a much more strategic "read through" than just allowing the screen reader to start at the beginning, and with web pages that always includes lots of junk links, and continue through to the end.

I have to say that, in conjunction with screen readers, I am absolutely loving the "Just Read" extension/add-on for Chrome and Firefox, respectively. It does the best job I've seen so far of extracting the text that I, as a sighted person, am looking for when I arrive on a webpage, removing all the links, etc., and just reading it as though I were reading it. It's a far more natural way of listening to most text being read than having a screen reader on default settings read it. You can always go back later to do a check for links, etc., in the screen reader itself if that seems to be warranted.

--

*Brian* *-* Windows 10 Home, 64-Bit, Version 1803, Build 17134

Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong.

~ H.L. Mencken , AKA The Sage of Baltimore


Brian's Mail list account <bglists@...>
 

To explain to the sighted, I used to get them to look through the middle of a toilet roll tube at the screen. In other words, the overview the sighted get is not easy to achieve conceptually for those who may never have seen a real screen. I did have sight and do understand a lot of it, but my sight was bad when the web started, and so many concepts of page layout are lost to me; I just see up and down in effect.
Of course NVDA can help with the layout of screens in applications, but sometimes you feel like: hey, I want to be at that last-place button thing, but screen and object nav often will not go there.
Brian


----- Original Message -----
From: "Holger Fiallo" <hfiallo@rcn.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 4:55 PM
Subject: Re: [nvda] The future of NVDA


Yes. As a formerly sighted person I agree. The eyes can go all over the place within seconds and look at something fast; JAWS, NVDA or another program can only go up, down, left or right.

From: Gene
Sent: Friday, June 1, 2018 10:48 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

Thank you. I should clarify a point I made. It is faster to skim a document by sight, but straight reading or listening may be as fast for a blind person. I haven't asked sighted people about this, but I generally listen at about 350 words per minute, and I can listen without loss of comprehension, though it's more taxing, at about 400 words per minute. Others can listen at faster speeds, I gather, without loss of comprehension. I don't know how taxing faster listening is for those who do it without comprehension loss. I don't know what the average sighted person's reading speed is on a computer screen. The statistic I've heard is that the average reading speed for a sighted person is about 300 words per minute.

But the inefficiency is in trying to skim using speech or Braille compared to sight, and in editing as well. I don't know how many blind people realize this, but a sighted person can review a document, find something that needs correction, such as a word to be changed or a phrase to be altered, click the mouse wherever he wants to make the change, and thus immediately move the cursor to that place. That is much faster than listening to a document or skimming a document by speech and moving to the place: moving by line, if necessary, then by word, then by character if the edit is not at the immediate beginning of a line or at the very end.

I believe that Braille displays in general have a feature that allows you to move the cursor to where you are reading, and that would be much more efficient than speech, so I won't compare Braille movement in editing to sighted people since I don't know enough about it.

I wanted to clarify that straight reading can be done very efficiently by speech or by Braille if the person is good at fast listening or reading. My other comments don't need to be changed.

And keep in mind that I'm discussing working from the keyboard and using a screen-reader for speech in the comments I modified. My comments about using voice commands to do such things are unchanged.

Gene


Sky Mundell
 

Absolutely. And if we don’t reunite and we’re divided, that causes problems. For example, look at what happened to the Romans in the olden days: they didn’t unite, they were divided, and look what happened. They were disbanded and destroyed.

 

From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Robin Frost
Sent: Friday, June 01, 2018 11:25 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

 

Hi,

Ron makes a good point. Whenever anyone begins prognosticating about the future, and such communications are peppered with enthusiasm for the demise of one program or company rather than statements based on factual possibilities, their entire argument becomes suspect in my humble view.

Since many screen reading software vendors are working in cooperation with Microsoft and other tech-oriented companies more and more of late, and since even competing tech giants seem to be collaborating on things like HID standards for connections to braille displays, whatever the future holds I’m certain that they, and we as a community of users, will find means to keep moving forward, and with diligence, forethought and polite advocacy when necessary will retain the strides we’ve made regarding accessibility and hopefully gain even more ground therein.

Robin

 

 

From: Ron Canazzi

Sent: Friday, June 1, 2018 3:13 AM

To: nvda@nvda.groups.io

Subject: Re: [nvda] The future of NVDA

 

I am not qualified enough to comment on this subject, but I'll say one thing about the author: he seems to be filled with glee over the prospect of the demise of VFO, FKA Freedom Scientific.

 

 


 



-- 
They Ask Me If I'm Happy; I say Yes.
They ask: "How Happy are You?"
I Say: "I'm as happy as a stow away chimpanzee on a banana boat!"


Ervin, Glenn
 

Has anyone here ever tried doing acronyms on an iPhone when doing a voice text?
It does not work well.
Glenn

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Brian's Mail list account via Groups.Io
Sent: Friday, June 01, 2018 1:27 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

Actually, I do not think we are there yet to entrust things to speech. Also
there is literacy. The argument for braille is literacy; if we speak
all input, the actual text will maybe sound sort of right, but nobody will be
able to spell, and worse, how could you do a search?

No, despite the allure of the idea, there are certain practicalities that
have to be considered here.
It's the same argument that some give me to suggest emojis and text speak
will replace all languages. No, they won't, simply due to the different
concepts of things inside people's heads.
Brian

----- Original Message -----
From: "Gene" <gsasner@ripco.com>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 11:43 AM
Subject: Re: [nvda] The future of NVDA


Your friend is so biased that his opinions about Window-eyes and JAWS are
highly suspect. And he so much wants something to be so that he
extrapolates without considering very important factors. Whatever happens
to keyboards, some sort of ability for sighted people to do things on a
screen in other means than speech will remain, touch screens, for example.
Consider some examples:

Consider reviewing a rough draft. Which is faster? A sighted person is not
going to listen to an entire document being read, looking for alterations to
make in a draft nor is he/she going to waste time telling the word processor
to find the phrase, and continue speaking from the stop of the phrase until
he says start to define the end of the phrase, then take some sort of action
such as delete it. If he wants to delete a phrase, what is the person going
to do, move to a passage using speech, mark the start of the passage with
speech, then mark the end of the passage with speech then say delete, then
say insert and speak a new passage? The same with copying and pasting from
one document to another,

And such operations are also far more efficient using a keyboard. I should
add that I haven't used programs that operate a computer with speech. If
I'm wrong, and people who use such programs know I am wrong, I await
correction. That's how things appear to me.

What about file management? Consider using speech to tell a computer you
want to delete fifteen noncontiguous files in a list of two hundred.
Consider how you might do it with speech as opposed to using a keyboard.

And considerations of speed and efficiency are true when using the keyboard
and a screen-reader as well. I've mainly discussed sighted users because
innovations are developed for sighted users.

Speech will become increasingly popular and powerful. It won't replace
visual access and manipulation in computers.

I don't use spread sheets but I expect those who do may point out how
cumbersome it would be to use speech with a spread sheet to perform any
somewhat complex series of operations with a screen-reader and some may want
to comment on the visual comparison..

As for JAWS versus Window-eyes, I won't say much but it's not the fault of
JAWS if the person was misled by his college advisor to learn a
screen-reader that has always been a far second in terms of its use in
business and institutions. He should take his anger at FS, if he must spend
so much time and energy being angry, and direct it where it belongs. I
could write paragraphs about why JAWS was dominant, some of it because it
got started first in the DOS screen-reader arena, some of it because it
built up all sorts of relationships with institutions, and some because it
was better for more employment situations than Window-eyes. How many years
did Window-eyes refuse to use scripts and limit the functionality of the
screen-reader in a stubborn attempt to distinguish itself from JAWS?
Finally, what did they do? They used scripts, which they didn't call
scripts, but apps. They weren't apps, and language should be respected.
Words have meanings and you can't, as one of the carachters does in Through
the Looking Glass, use any word to mean anything desired.

But enough. I'll leave the discussion to others from this point unless I
have something additional to add.

Gene
----- Original Message -----
From: The Gamages via Groups.Io
Sent: Friday, June 01, 2018 2:45 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA


Voice commands, fine, but how does your friend check what he has ordered?
Just a leap of faith, or a sort of screen reader which tells him? Think
about it.

By his closing, your friend is a Trekkie [Star Trek fan].


Best Regards, Jim.

From: Sky Mundell
Sent: Friday, June 01, 2018 5:40 AM
To: nvda@nvda.groups.io
Subject: [nvda] The future of NVDA

Hello NVDA community. It’s Sky. I wanted to ask you guys a question. Will
NVDA be incorporating voice commands into the screen reader? Because a
friend of mine has told me that in three years everything is going to be
voice activated. Yes, we have Dictation Bridge for voice activation, but
what my friend means is that in three years computers, etc. will all be
operated via voice activation, without a keyboard. Here is what he has to
say.

From: bj colt [mailto:bjcolt@blueyonder.co.uk]
Sent: Thursday, May 31, 2018 8:12 AM
To: Sky Mundell
Subject: Re: CSUN



Hi Sky,



I just received an email from my local supermarket. I do an online shop
there every week. From today I can order it via Alexa, Google Home, and
other apps using voice-only ordering.



I did say this is the way forward. With Amazon and Google competing, this
voice activation is going to be the next huge thing in computing. I've said
this for a while, as you know. The next step is using actual programs/apps
via voice activation. Just watch, my friend. VFO is finished, on the way
out. They won't be able to compete in an open market, not one as huge as
this. Just imagine, my friend. At the moment I have my favorites in a
shopping list. Think about the key strokes I need to use to get to them.
Then additional items. I have to do a search of often up to 40 products
with a similar name, arrowing down, tabbing down, then adding them to my
shopping basket, going through the dates and times for delivery, then all
the key strokes in the card details authorization process. All done with
our voice, in at least a quarter of the time normally spent shopping. This
does spell the end of VFO.



Everything is going to be voice activated in the next 3 years. There isn't
any other way for web developers to go.



Progress sometimes, my friend, is slow, but when it starts, it is like a
high-speed jet aircraft. Nothing stands in its way.



There will be some people who won't change, or who will use both methods to
carry out tasks. Now VFO have to make JAWS act on voice commands. With Dug
in Microsoft, I can see VFO being left thousands of miles behind. Then,
when they introduce pay-monthly fees, the extinction of JAWS and other
products will come very suddenly and dramatically. They may think they have
the market share for programs for the blind. They don't any more, and they
are the ones who are blind, not us.



Live long and prosper, John


Brian's Mail list account <bglists@...>
 

However, on a different tack: on all platforms other than Windows, the operating system's default screen reader is the only one supported. Will there come a day when Narrator is just so good that a third-party screen reader will be pointless?
What about some kind of collaboration with the Narrator team to make a Windows system that just works? I certainly am not so vain as to imagine that a system like NVDA, if it were my creation, should be the only game in town. The issue is: will JAWS and Dolphin survive when NVDA and Narrator are free to use, and if Narrator can be helped to get everything right, is there any need for NVDA?

Devil's advocate, but a legitimate question, I feel.
I.e., if Narrator had scripting and could then support any software that Windows ran, then the programmers out here could concentrate on the scripting of awkward programs.
Brian

bglists@blueyonder.co.uk
Sent via blueyonder.
Please address personal E-mail to:-
briang1@blueyonder.co.uk, putting 'Brian Gaff'
in the display name field.

----- Original Message -----
From: "The Gamages via Groups.Io" <james.gamage=btinternet.com@groups.io>
To: <nvda@nvda.groups.io>
Sent: Friday, June 01, 2018 4:16 PM
Subject: Re: [nvda] The future of NVDA


Hello Gene,

You are so correct. Having been a sighted person, I agree it is far quicker
to read a document visually than to hear it read out; the eye can
assimilate information far beyond the capabilities of the ear, and far
quicker. You also explain vividly the nightmare of trying to edit with
voice commands.
I spent years learning to touch type, yet I learn from sighted friends and
relatives that they mainly use one or two fingers to type on a keyboard or
a touch screen. Progress?

Like most things, we are stuck with voice output to read things; as blind
people we don’t have much choice, so a mixture of technologies is the way
to go. We use the things that suit our needs and leave others to do the
same. I’ve said it before: long live NVDA.

Best Regards, Jim.

From: Gene
Sent: Friday, June 01, 2018 11:43 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA

Your friend is so biased that his opinions about Window-eyes and JAWS are
highly suspect. And he so much wants something to be so that he
extrapolates without considering very important factors. Whatever happens
to keyboards, some sort of ability for sighted people to do things on a
screen in other means than speech will remain, touch screens, for example.
Consider some examples:

Consider reviewing a rough draft. Which is faster? A sighted person is not
going to listen to an entire document being read, looking for alterations to
make in a draft nor is he/she going to waste time telling the word processor
to find the phrase, and continue speaking from the stop of the phrase until
he says start to define the end of the phrase, then take some sort of action
such as delete it. If he wants to delete a phrase, what is the person going
to do, move to a passage using speech, mark the start of the passage with
speech, then mark the end of the passage with speech then say delete, then
say insert and speak a new passage? The same with copying and pasting from
one document to another.

And such operations are also far more efficient using a keyboard. I should
add that I haven't used programs that operate a computer with speech. If
I'm wrong, and people who use such programs know I am wrong, I await
correction. That's how things appear to me.

What about file management? Consider using speech to tell a computer you
want to delete fifteen noncontiguous files in a list of two hundred.
Consider how you might do it with speech as opposed to using a keyboard.

And these considerations of speed and efficiency apply when using the
keyboard and a screen-reader as well. I've mainly discussed sighted users
because innovations are developed for sighted users.

Speech will become increasingly popular and powerful. It won't replace
visual access and manipulation in computers.

I don't use spreadsheets, but I expect those who do may point out how
cumbersome it would be to use speech with a spreadsheet to perform any
somewhat complex series of operations with a screen-reader, and some may
want to comment on the visual comparison.

As for JAWS versus Window-eyes, I won't say much but it's not the fault of
JAWS if the person was misled by his college advisor to learn a
screen-reader that has always been a far second in terms of its use in
business and institutions. He should take his anger at FS, if he must spend
so much time and energy being angry, and direct it where it belongs. I
could write paragraphs about why JAWS was dominant, some of it because it
got started first in the DOS screen-reader arena, some of it because it
built up all sorts of relationships with institutions, and some because it
was better for more employment situations than Window-eyes. How many years
did Window-eyes refuse to use scripts and limit the functionality of the
screen-reader in a stubborn attempt to distinguish itself from JAWS?
Finally, what did they do? They used scripts, which they didn't call
scripts, but apps. They weren't apps, and language should be respected.
Words have meanings, and you can't, as one of the characters does in Through
the Looking Glass, use any word to mean anything desired.

But enough. I'll leave the discussion to others from this point unless I
have something additional to add.

Gene


Brian's Mail list account <bglists@...>
 

Actually, I do not think we are there yet to entrust things to speech. Also there is literacy. The argument for braille is literacy, and thus if we speak all input, the actual text will maybe sound sort of right, but nobody will be able to spell; and worse, how could you do a search?


No, despite the allure of the idea, there are certain practicalities that have to be considered here.
It's the same argument that some give me to suggest emojis and text-speak will replace all languages. No they won't, simply due to the different concepts of things inside people's heads.
Brian




Robin Frost
 

Hi,
Ron makes a good point. Whenever anyone begins prognosticating about the future, and such communications are peppered with enthusiasm for the demise of one or another program or company rather than with statements based on factual possibilities, it causes their entire argument to become suspect, in my humble view.
Since many screen-reading software vendors are working in cooperation with Microsoft and other tech-oriented companies more and more of late, and since even competing tech giants seem to be collaborating on things like HID standards for connecting braille displays, whatever the future holds, I'm certain that they, and we as a community of users, will find means to keep moving forward. With diligence, forethought, and polite advocacy when necessary, we will retain the strides we've made regarding accessibility and hopefully gain even more ground.
Robin
 
 

From: Ron Canazzi
Sent: Friday, June 1, 2018 3:13 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] The future of NVDA
 

I am not qualified enough to comment on this subject, but I'll say one thing about the author: he seems to be filled with glee over the prospect of the demise of VFO, formerly known as Freedom Scientific.

 


On 6/1/2018 12:40 AM, Sky Mundell wrote:


-- 
They Ask Me If I'm Happy; I say Yes.
They ask: "How Happy are You?"
I Say: "I'm as happy as a stowaway chimpanzee on a banana boat!"