
Re: Ctrl+Alt+N in Google Docs

 

On Tue, Jan 12, 2021 at 07:08 PM, Sascha Cowley wrote:
This isn't an edge case; Alt+Ctrl+N is *NOT* an NVDA keyboard shortcut; it's a Windows one.
-
Yep.  It's a Windows keyboard shortcut that fires up NVDA.

And Windows is what gets to see any keystroke sequence first, and if it's a defined shortcut it will process it.  Only if it's not recognized by Windows will it get passed along to the accessibility software (screen reader), and only if it's not recognized by that application will it get passed on to the next thing (Google Docs app under the browser, in this case).

You would have to change or remove the Windows shortcut so that Ctrl+Alt+N is no longer claimed by Windows; it would then be passed to NVDA, which would ignore it, and fall through to the browser and finally to Google Docs running in the browser.
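That fall-through order is essentially a chain of responsibility. The sketch below is an illustrative model only, with made-up handler names; it is not NVDA's or Windows' actual dispatch code:

```python
# Toy model of keystroke routing: each layer either consumes the
# keystroke (returns a result) or passes it on (returns None).

def windows_layer(key):
    # Windows sees every keystroke first; Ctrl+Alt+N is a defined
    # Windows shortcut (launch NVDA), so it never reaches lower layers.
    if key == "ctrl+alt+n":
        return "Windows: launch NVDA"
    return None

def screen_reader_layer(key):
    # The screen reader handles its own gestures next.
    if key == "nvda+f2":
        return "NVDA: pass next keystroke through"
    return None

def app_layer(key):
    # Anything unclaimed falls through to the focused app
    # (Google Docs in the browser, in this case).
    return "Google Docs receives " + key

def dispatch(key):
    for layer in (windows_layer, screen_reader_layer, app_layer):
        result = layer(key)
        if result is not None:
            return result

print(dispatch("ctrl+alt+n"))    # consumed by Windows, never reaches Docs
print(dispatch("ctrl+shift+x"))  # falls through to Google Docs
```

The point is only the ordering: a keystroke consumed at one layer never reaches the layers below it, which is why remapping the Windows shortcut is the only way to let Ctrl+Alt+N through.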
 
--

Brian - Windows 10 Pro, 64-Bit, Version 20H2, Build 19042  

The depths of denial one can be pushed to by outside forces of disapproval can make you not even recognize yourself to yourself.

       ~ Brian Vogel

 


more Zoom issues

Bob Cavanaugh <cavbob1993@...>
 

Hi all,
So, since Zoom has come up on the list twice today, I figured I'd
throw in something else. About three weeks ago, someone wrote to the
list saying that NVDA does not read all participants in the list, and
I did quite a bit of testing on this. The last we heard however, was
that it appeared to have been fixed in the latest version of Zoom. I
checked in a call the other day though, and it still acts the same
way. How do I check if there is an update for Zoom to see if this has
in fact been fixed? Zoom hasn't tried to update to my knowledge in
quite some time, though it might happen in the background.
Bob


excel office365 with NVDA 2020.3 questions

Josh Kennedy
 

Hi

I was making a spreadsheet earlier and wanted to know the font and color of the cells. I am using a laptop. When I pressed CapsLock+F, NVDA just said "no caret"; it did not give me font, color, and attribute information about the current cell or selected cells. How do I get this information on demand with NVDA? I just did a test with Word: I can get font and color info just fine in Word, but not in Excel for some reason.

Also, I successfully moved a chart to a new worksheet. But how do I change the position of the chart's labels so it prints better with the Tiger add-on for Excel in combination with the Braille Buddy embosser?

 

Josh

 

 

Sent from Mail for Windows 10

 


Re: Lowering Hand on Zoom Client

Sam Bushman
 

When you press Alt+Y, does it announce whether the hand is up or down?

-----Original Message-----
From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Lino Morales
Sent: Tuesday, January 12, 2021 5:21 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Lowering Hand on Zoom Client

I don't bother using said button. I use the shortcut key Alt+Y. That's a toggle.



Re: Will NVDA eventually use AI for better GUI recognition?

Pranav Lal
 

Hi all,

As others have mentioned, the problem is the model. We need someone to compile data on what user interfaces look like, probably at the control level, and then determine how to interact with them.

Suppose there is a non-standard edit box: NVDA would need to know that there is an edit box, and then it would also need to interact with it. It may not get the relevant events from the control, so it is hard to say how good such a feature would be, but it is very possible to do. Does anyone have a database of pictures of Windows controls?
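Such a database would pair control screenshots with role labels. Here is a minimal sketch of the labelling side, with hypothetical field names and thresholds (no real dataset or model behind it):

```python
from dataclasses import dataclass

@dataclass
class LabeledControl:
    image_path: str   # screenshot crop of the control
    role: str         # e.g. "editbox", "button", "checkbox"
    bounds: tuple     # (left, top, width, height) in screen pixels

# A recogniser trained on such data would emit candidate (role, confidence)
# pairs; a screen reader would likely only act on confident predictions.
def usable(predictions, threshold=0.8):
    return [(role, conf) for role, conf in predictions if conf >= threshold]

sample = LabeledControl("edit_box_042.png", "editbox", (120, 80, 300, 28))
print(sample.role)                                    # editbox
print(usable([("editbox", 0.93), ("button", 0.41)]))  # [('editbox', 0.93)]
```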


Pranav


Chicken Nugget (accessible twitter client) 4.71 released

Cristóbal
 

Hello list,

As the topics of Twitter and accessible Twitter clients come up every now and then, a heads up: as the subject states, a new Chicken Nugget was just put out.

https://getaccessibleapps.com/chicken_nugget/download

Seems to have bug fixes and now supports alt text and other stuff.

 

Cristóbal

 


Re: Lowering Hand on Zoom Client

Lino Morales
 

I don't bother using said button. I use the shortcut key Alt+Y. That's a toggle.



Lowering Hand on Zoom Client

Bhavya shah
 

Dear all,

I am using the latest versions of NVDA, Zoom client and Windows 10.
Pressing the Raise Hand button on the Zoom client is not an issue.
However, lowering it is what gets tricky. This is because even once my
hand is raised, the Raise Hand button still reads as Raise Hand button
(unlike on the web interface of Zoom where it morphs into the Lower
Hand button). Simply pressing the button again to lower my hand is a problematic approach because (a) I may not remember the state of my hand, and (b) some hosts acknowledge hands and lower them, which NVDA announces but I may miss while focusing on the Zoom meeting's audio. With regard to this accessibility issue, I was wondering whether this has already been reported to Zoom or screen reader vendors, and whether a temporary fix could be issued through an add-on, particularly an update to the awesome Zoom Enhancements add-on.

One alternative strategy that I intend to test the next time I get on
to a Zoom meeting is to check if a raised hand is noted in the entry
corresponding to my name in the Participants list. I know a lowered
hand is not, but even if a raised hand is, I would be able to
conclusively determine the state of my hand.

If you know of other ways to access this information so that I know
the state of my virtual hand and can alter it as necessary, please do
share. Any and all tips and hacks are much appreciated.

Thanks.

--
Best Regards
Bhavya Shah
Stanford University | Class of 2024
E-mail Address: bhavya.shah125@gmail.com
LinkedIn: https://www.linkedin.com/in/bhavyashah125/


Using tracking changes in Word and NVDA

Janet Brandly
 

Hello all,

 

I am doing a course assignment which involves editing documents produced by speech recognition software. I am to use tracking changes in Word to show changes I have made. Would someone out there be able to talk me through how to use tracking changes in Word with the latest stable version of NVDA and Windows ten?

 

Thanks,

 

Janet


Re: Ctrl+Alt+N in Google Docs

Sascha Cowley
 

This isn't an edge case; Alt+Ctrl+N is *NOT* an NVDA keyboard shortcut; it's a Windows one. I'm not sure if NVDA could somehow intercept it and stop it triggering the shortcut, but even if it could, this would likely be a bad idea, as, in the case of NVDA playing up, it may prevent the user from restarting NVDA without restarting the whole system.


Ctrl+Alt+N in Google Docs

Bhavya shah
 

Dear all,

Google Docs has a number of keyboard shortcuts requiring Ctrl+Alt+N to
be pressed. You may have guessed where I am going with this, but here
are a few specific remarks in this context:
* I press NVDA+F2 so that the next keyboard shortcut I press goes straight to Google Docs without NVDA's interference. That doesn't work: NVDA+F2 followed by Ctrl+Alt+N still restarts NVDA. I understand why this edge case might have been missed, but I was wondering if this issue has already been documented on the GitHub tracker?
* I realize that I can just alter the hot key to launch NVDA from the
Properties dialog corresponding to the NVDA Desktop icon for instance.
But is there any cleaner alternative approach to this problem?

I would greatly appreciate any assistance with the above.

Thanks.

--
Best Regards
Bhavya Shah
Stanford University | Class of 2024
E-mail Address: bhavya.shah125@gmail.com
LinkedIn: https://www.linkedin.com/in/bhavyashah125/


Re: Will NVDA eventually use AI for better GUI recognition?

Devin Prater
 

OCR wasn't all that great 10 years ago. Now, it's very usable. I expect the same to be true of screen/user interface recognition years from now as well.

On 1/12/21 1:39 PM, Luke Robinett wrote:
Sean,
I’m not fully on board with your assessment. Machine learning isn’t a static thing; by its very definition, you feed it content in a particular domain area and it gets better and better at what it does. Just because something is still in its infancy doesn’t mean it isn’t worth pursuing; if that were the attitude, we wouldn’t have screen readers in the first place. Besides, much like current OCR capabilities, this wouldn’t be a replacement for how NVDA normally works, just another supplement. And like I said, it’s already being implemented quite successfully in VoiceOver, so the argument that the tech just isn’t reliable enough yet kind of goes out the window. That doesn’t mean it’s perfect, but when has anything about accessibility ever been perfect? LOL.


 


Re: using NVDA with Bluetooth

Dave Grossoehme
 

Good afternoon. Have you tried using Narrator when using your Bluetooth device, to see whether this is a Windows problem or an NVDA problem?

Dave

On 1/6/2021 3:58 PM, Luke Robinett wrote:
Sorry, I should have been clearer. The AirPods connect via Bluetooth. They connect fine, it’s just that slight delay and truncating of the first part of NVDA speech that makes them unusable for me. Would love if this has been fixed somehow. Thanks.

On Jan 6, 2021, at 2:57 PM, Luke Robinett via groups.io <blindgroupsluke=gmail.com@groups.io> wrote:

Hey Bob and others,
I’ve tried several times in the recent past to use my Apple AirPods with my PC and NVDA but it never works very well. There’s a bit of lag and the first bit of whatever NVDA speaks gets chopped off. Has this since been fixed, either in windows or NVDA?

On Jan 1, 2021, at 8:58 PM, Bob Cavanaugh <cavbob1993@gmail.com> wrote:

Ah, not sure how I missed this until this morning, but just tried it
and it works brilliantly! Now all I have to decide is which output to
stream to the speaker, Zara or NVDA. I think this is going to work
though. Now all I need is a test server to put into Alta Cast so I can
try this out.

On 12/31/20, Jackie <abletec@gmail.com> wrote:
Bob, please try going to your synthesizer settings (control+NVDA+s),
select your synthesizer, & set your desired audio output device there.

On 12/31/20, Bob Cavanaugh <cavbob1993@gmail.com> wrote:
Hi everyone,
It was a post on this list that inspired me to try using NVDA with my
Bluetooth speaker, and to my surprise, it actually worked. My question
is, can I somehow tell NVDA and only NVDA to output to the Bluetooth
speaker, and have everything else still come through the computer
speakers?
Bob





--
Subscribe to a WordPress for Newbies Mailing List by sending a message to:
wp4newbs-request@freelists.org with 'subscribe' in the Subject field OR by
visiting the list page at http://www.freelists.org/list/wp4newbs
& check out my sites at www.brightstarsweb.com & www.mysitesbeenhacked.com



Re: possible issue with NVDA on the Windows 10 log-on screen

Rob Hudson
 

No idea. That one is weird.

----- Original Message -----
From: "Bob Cavanaugh" <cavbob1993@gmail.com>
To: nvda@nvda.groups.io
Date: Tue, 12 Jan 2021 11:17:41 -0800
Subject: Re: [nvda] possible issue with NVDA on the Windows 10 log-on screen

But shouldn't hitting tab do the same thing? That's what I don't understand.


Re: Will NVDA eventually use AI for better GUI recognition?

Luke Robinett <blindgroupsluke@...>
 

Sean,
I’m not fully on board with your assessment. Machine learning isn’t a static thing; by its very definition, you feed it content in a particular domain area and it gets better and better at what it does. Just because something is still in its infancy doesn’t mean it isn’t worth pursuing; if that were the attitude, we wouldn’t have screen readers in the first place. Besides, much like current OCR capabilities, this wouldn’t be a replacement for how NVDA normally works, just another supplement. And like I said, it’s already being implemented quite successfully in VoiceOver, so the argument that the tech just isn’t reliable enough yet kind of goes out the window. That doesn’t mean it’s perfect, but when has anything about accessibility ever been perfect? LOL.

On Jan 12, 2021, at 1:32 AM, Sean Murphy <mhysnm1964@...> wrote:



AI is not that smart, as it is entirely dependent on the data used to train it. All object recognition AI relies on someone having applied a specific description to icons, parts of an image, etc. An example of where this breaks down is when a developer does not use standard icons for navigation (backwards, forward, OK, and so on): the user will get some description from the object recognition, but it might not be meaningful. The term used for that description in other countries might also not be known:

Let's say an icon is used to indicate the delete button. The AI object recognition detects it as a trash can. Outside the USA the term "trash can" is not used, so the user might not understand that this is the delete button. Say the delete button is tied to some important bit of information; they press it to try to work out what it does, and oops, that information is gone. The object recognition would have to know what country your device is set to and then use that country's term, for example "rubbish bin" (the Australian term for trash can). This all assumes the icon really is a delete button: there is no guarantee the designer of the app used internationally standard icons, even though there is an organisation that defines standard icons, emoticons, etc.
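The locale problem above can be illustrated with a tiny lookup sketch; the term table and fallback rule are invented for illustration, not any screen reader's actual data:

```python
# Map a recognised icon concept to the term used in the device's locale,
# falling back to a default locale, then to the raw concept name.
TERMS = {
    "delete_icon": {
        "en-US": "trash can",
        "en-AU": "rubbish bin",
        "en-GB": "bin",
    },
}

def label(concept, locale, default="en-US"):
    terms = TERMS.get(concept, {})
    return terms.get(locale, terms.get(default, concept))

print(label("delete_icon", "en-AU"))  # rubbish bin
print(label("delete_icon", "fr-FR"))  # no French entry: trash can
```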

 

Outside simple standardised icons, object recognition has a long way to go and requires far more research. I do have hope that it will eventually be able to handle complex images like organisation charts.

 

Note, the above is just an example demonstrating that AI is not a silver bullet. It does help, but I have seen situations on Apple where it causes more problems than it fixes, especially when the button, link, etc. already has an accessible name which VoiceOver reads out; the Apple solution still announces the recognised icon as well, which makes things too verbose. Before it is introduced into any screen reader, real careful thought needs to go into how it is adopted, as I can see it becoming more of an annoyance than a solution.

 

 

I am not saying it should not be explored to see how it will help. One area is AI dynamically changing the UI of a program to make it easier for a keyboard user; this is something people are already working on.

 

Sean

 

 

From: nvda@nvda.groups.io <nvda@nvda.groups.io> On Behalf Of Luke Robinett
Sent: Tuesday, January 12, 2021 10:49 AM
To: nvda@nvda.groups.io
Subject: [nvda] Will NVDA eventually use AI for better GUI recognition?

 

So one thing I enjoy about VoiceOver on my iPhone is that it has gotten really good at using AI to make otherwise inaccessible UI elements available to interact with. More than just simple OCR, it can ascertain the layout and make educated guesses about controls like buttons and tabs, greatly expanding the usability of apps that would otherwise be partially or totally inaccessible.

Is there any chance NVDA will eventually reach that level of sophistication? I know there are third party add-ons that attempt to bridge that gap for specific types of apps, for example the great Sibiac add-on which helps make certain music production apps and plugins accessible with NVDA, but it would be great to see these capabilities broadened and rolled into the core functionality of the product.
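As a toy illustration of that layout-based guessing, the sketch below assigns a role from nothing but a detected box's geometry. The thresholds are invented for the example and bear no relation to VoiceOver's real model:

```python
# Guess a control's role from crude geometry of its bounding box.
def guess_role(width, height, has_text):
    aspect = width / height
    if has_text and 2 <= aspect <= 6 and height < 60:
        return "button"      # short, wide, labelled: likely a button
    if aspect > 6 and height < 50:
        return "editbox"     # very wide and thin: likely a text field
    if width == height and width < 40:
        return "checkbox"    # small square: likely a checkbox
    return "unknown"

print(guess_role(120, 40, has_text=True))   # button
print(guess_role(300, 30, has_text=False))  # editbox
print(guess_role(24, 24, has_text=False))   # checkbox
```

A real recogniser would combine many more signals (text, colour, position relative to other controls), but the shape of the problem is the same: turn raw pixels into a guessed role that a screen reader can expose.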

 

Thanks,

Luke

 


Re: Will NVDA eventually use AI for better GUI recognition?

Luke Robinett <blindgroupsluke@...>
 

Modern Python is just as capable a language as Java, C++, JavaScript and others. Also, you can write modules in C for NVDA, if I’m not mistaken; I believe some of the application’s core is written in that language.

On Jan 11, 2021, at 5:01 PM, Rob Hudson <rob_hudson3182@opopanax.net> wrote:

Yeah, all I know is that it's a scripting language. I didn't know how deep it got into machine learning, fuzzy logic and AI.

----- Original Message -----
From: "Jaffar Sidek" <jaffar.sidek10@gmail.com>
To: nvda@nvda.groups.io
Date: Tue, 12 Jan 2021 08:55:24 +0800
Subject: Re: [nvda] Will NVDA eventually use AI for better GUI recognition?

You can do machine learning with Python, so it may not be impossible to implement AI.

On 12/1/2021 8:29 am, Rob Hudson wrote:
Is that capability even available in Python?

----- Original Message -----
From: "Supanut Leepaisomboon" <supanut2000@outlook.com>
To: nvda@nvda.groups.io
Date: Mon, 11 Jan 2021 16:18:00 -0800
Subject: Re: [nvda] Will NVDA eventually use AI for better GUI recognition?

That is actually a great idea, I can imagine this capability being used to make PC games more accessible with NVDA.


Re: Will NVDA eventually use AI for better GUI recognition?

Luke Robinett <blindgroupsluke@...>
 

Rob, it for sure is. In fact, Python is touted as one of the better languages for machine learning.

On Jan 11, 2021, at 4:30 PM, Rob Hudson <rob_hudson3182@opopanax.net> wrote:

Is that capability even available in Python?

----- Original Message -----
From: "Supanut Leepaisomboon" <supanut2000@outlook.com>
To: nvda@nvda.groups.io
Date: Mon, 11 Jan 2021 16:18:00 -0800
Subject: Re: [nvda] Will NVDA eventually use AI for better GUI recognition?

That is actually a great idea, I can imagine this capability being used to make PC games more accessible with NVDA.


Re: possible issue with NVDA on the Windows 10 log-on screen

Bob Cavanaugh <cavbob1993@...>
 

But shouldn't hitting tab do the same thing? That's what I don't understand.







Re: possible issue with NVDA on the Windows 10 log-on screen

Rob Hudson
 

Well, to fire up Narrator, you hit a keystroke, which dismisses the lock screen.
There is a way to turn the lock screen off:
1. Press Windows+R, type gpedit.msc, and press Enter.
2. You'll end up in a tree view.
3. Find Administrative Templates and hit right arrow.
4. Find Control Panel and hit right arrow.
5. Find Personalization and hit Tab.
6. Activate the option that says "Do not display the lock screen".
Once you have that set, your login prompt should show up normally.
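On Windows editions without gpedit.msc, the same policy can be set directly in the registry. The value below is the standard NoLockScreen policy setting; this is a sketch of the equivalent change, so export a backup of the key before importing:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Personalization]
"NoLockScreen"=dword:00000001
```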



Re: possible issue with NVDA on the Windows 10 log-on screen

Bob Cavanaugh <cavbob1993@...>
 

I am on version 2004, build 19041.685. I don't understand why the lock screen would be a problem: before I fire up Narrator, I try pressing Tab a few times. This happens so rarely that I usually don't worry about it, and every time I fire up Narrator, it seems to resolve the issue.

On 1/11/21, Quentin Christensen <quentin@nvaccess.org> wrote:
Not having a go at you Ian, but just in general can people please not say
things like: "Windows 10 machine with the latest build Windows 10". It's
really not as helpful as it seems. My computer refused to do updates for
about six months last year - it was just a glitch, but it never gave me an
error message unless I looked on the update screen - I could easily have
assumed I had "the latest build", when it fact I could have been two major
versions behind. Also Microsoft release different builds to different
users - with their major builds they stagger them so it can take three
months before it is rolled out around the world, no matter how many times a
particular use hits "check for updates". Even then, because of various
things including the exact hardware your machine has, or even a feature
Microsoft want to roll out to half the users, they might roll out a major
build but two slightly different versions with it being completely random
who gets one and who gets the other. That is why I ask for the specific
build number - in general but also in this case. If this is a bug which
has crept back into Windows, even just practically, we can't go to
Microsoft and say "a user has reported it in the latest build" - they'll
need to know the exact build - and even before that - if other users come
forward and say they can replicate it and they're all on the same build,
and other users can't and they're on a newer build, it's likely it's been
fixed (or conversely if they're all on older builds, it's likely it's a new
bug crept in) - and all of those users might be on "the latest build" (they
can get).

Again, not picking on you or anyone, just trying to explain why it is
really very helpful if you could please provide which build number you are
using when reporting something (and the more the better - if it's a problem
you're having opening a document in Word, then the version of Windows, NVDA
and Office could all be useful).

Kind regards

Quentin.

On Tue, Jan 12, 2021 at 6:19 PM Ian Blackburn <ianblackburn@westnet.com.au>
wrote:

I get this on my Windows 10 machine with the latest build of Windows 10, but the solution is to press any key, or just press the Enter key, and then your password box shows up. Mind you, I have two accounts on my machine, and I see this as normal behaviour, I think. I can't see it, but I think the screen shows the date when you boot the machine, and then you have to click somewhere, which gives you your logon screen. As I can't see, I can't give you the exact reason, but it's been happening ever since I've had Windows 10. I don't see it as a problem; it's more a quirk of the operating system. I mean, at least you've got it talking to you. Solution: press Enter and then go ahead. Regards, Ian


On 12 Jan 2021, at 2:24 pm, Quentin Christensen <quentin@nvaccess.org>
wrote:


Which build of Windows are you using? We saw this issue a couple of
years
ago, but I hadn't heard of it happening recently.

You can get your build by opening "winver" - press the windows key, type
winver and press enter.

Quentin.

On Tue, Jan 12, 2021 at 5:03 PM Rob Hudson <rob_hudson3182@opopanax.net>
wrote:

What's happening is your lock screen is getting in the way. Try hitting
the control key or some other key, and you should get jumped into your
regular login screen.

----- Original Message -----
From: "Bob Cavanaugh" <cavbob1993@gmail.com>
To: nvda@nvda.groups.io
Date: Mon, 11 Jan 2021 21:54:15 -0800
Subject: [nvda] possible issue with NVDA on the Windows 10 log-on screen

Hi everyone,
I'm not sure what's causing this, but there seems to be an intermittent
issue with NVDA on the log-on screen of Windows 10. Yesterday was a
frustrating morning for me computer-wise, requiring several restarts.
After one such restart, NVDA just kept speaking the first letter of my
password after typing it several times. Usually it speaks that letter
once, then the password box comes up and I can type normally. Why the
log-on screen behaves this way I'm not sure, but it's something I've
gotten used to having Windows 10 for 3 and a half years. Once in a
while however, NVDA doesn't want to focus on anything on the log-on
screen. I tab around and it doesn't speak. So, I fire up Narrator,
which usually causes both screen readers to speak at the same time,
and from then on NVDA also works properly. I'm not sure if there's
anything the developers can do to track this down and fix it, but if
there is I'd like to see it done.
Bob











--
Quentin Christensen
Training and Support Manager

Web: www.nvaccess.org
Training: https://www.nvaccess.org/shop/
Certification: https://certification.nvaccess.org/
User group: https://nvda.groups.io/g/nvda
Facebook: http://www.facebook.com/NVAccess
Twitter: @NVAccess <https://twitter.com/NVAccess>




