Unable to Read Accented Characters with NVDA


Suhas D
 

Hey all! I didn't know if I should post this in the main group or not, so I'm posting it here.

I've recently noticed that NVDA doesn't read accented characters, like this one: "é".

I reset the config to factory defaults, but without any luck.

Then I changed the synth to SAPI5, and NVDA started reading out the accented characters.

Is it just me, or do the Windows OneCore voices not support reading accented characters?

If they don't, is there a way to make them read accented characters?


Also, is there a way to measure the rate at which NVDA speaks, in actual words per minute rather than the percentage shown in the settings?


Thank you


---
Suhas
Sent from Thunderbird

“To avoid criticism say nothing, do nothing, be nothing.”
Elbert Hubbard


Gene
 

The only way I know to measure, or at least get an approximate idea of, the speech rate is to take text whose word count you have already determined (for example, using the word count in a word processor), then time how long NVDA takes to speak it at whatever speech setting you are using.
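
For anyone who wants the arithmetic spelled out, here is a minimal sketch of that calculation in Python; the word count and the timed duration are values you measure yourself, and the numbers below are only placeholders:

    # Gene's method: time NVDA reading a passage whose word count is
    # already known, then convert the result to words per minute.
    word_count = 250        # from your word processor's word count
    seconds_to_speak = 93   # timed with a stopwatch while NVDA reads

    words_per_minute = word_count / (seconds_to_speak / 60)
    print(f"Approximate speech rate: {words_per_minute:.0f} WPM")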


Gene



Gene
 

I don't know whether there is any way, such as using the speech dictionary, to have this done. My Windows OneCore voices also don't read é. This sounds like a complaint that should be directed to Microsoft Accessibility, though just where, someone else might be able to tell you.


Gene



 

Brian Vogel


On Tue, Feb 15, 2022 at 08:04 AM, Suhas D wrote:
Then I changed the synth to SAPI5, and NVDA started reading out the accented characters.
-
And, again, there you have it (and not just for NVDA, either).

Screen readers don't have anything to do with "the actual reading part"; they pass off strings (be they single characters or longer) to a synth, and the synth pronounces them.
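
You can see that division of labor for yourself. Here is a minimal sketch (assuming the pywin32 package is installed) that sends text straight to a SAPI5 voice with no screen reader involved at all; if "e" and "é" sound different here, that distinction is coming from the synth:

    # Drive a SAPI5 voice directly, bypassing NVDA entirely.
    import win32com.client

    voice = win32com.client.Dispatch("SAPI.SpVoice")
    voice.Speak("e")   # plain e
    voice.Speak("é")   # e with an acute accent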

If a particular letter, word, or phrase is not read correctly by a given synth, contacting the maker of that synth and registering your displeasure is the way to go.

The only way I can think of to force a change is with a voice dictionary (preferable to the default dictionary): enter the single character as a "whole word" entry and play around with whatever replacement string ends up saying what you want that letter to sound like. If you used the default dictionary, the change would carry over to every synth and would likely screw up the pronunciation by those synths that already get it right.
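
To illustrate what a "whole word" entry does, here is a conceptual sketch in Python (NVDA's own dictionary matching is more involved than this, so treat it purely as an illustration):

    # Replace é with "e acute", but only where it stands alone as a
    # word, so words like "café" are left for the synth to handle.
    import re

    def apply_whole_word_entry(text, pattern, replacement):
        return re.sub(r"(?<!\w)" + re.escape(pattern) + r"(?!\w)",
                      replacement, text)

    print(apply_whole_word_entry("the letter é alone", "é", "e acute"))
    # -> the letter e acute alone
    print(apply_whole_word_entry("a café visit", "é", "e acute"))
    # -> a café visit (unchanged)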
--

Brian - Windows 10, 64-Bit, Version 21H2, Build 19044

Under certain circumstances, profanity provides a relief denied even to prayer.

        ~ Mark Twain


Quentin Christensen
 

Brian is right - one of the first things to try in a "this isn't being read properly" situation is a different synthesizer.  In this case, OneCore isn't reading any difference between e with an acute accent (é) and a plain e.  But you found that SAPI5 DOES read it differently, and I just tested and eSpeak-NG does as well.

Another workaround would be to edit the punctuation/symbol pronunciation list and add an entry for é to read "e acute" (or however you want it read).
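
For anyone comfortable editing files rather than the dialog, a sketch of what such an entry might look like in a symbols.dic file, based on the tab-separated format NVDA's developer guide describes (symbol, replacement, level, preserve); the level and preserve values here are guesses, so check them against your own locale file before relying on this:

    symbols:
    é	e acute	char	always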




--
Quentin Christensen
Training and Support Manager


Suhas D
 

Brian, Quentin, and Gene, thanks for all the replies.

I think the best solution here is to contact Microsoft and let them know about this.

Also, about adding the characters to the speech dictionary: I don't think that's the best solution for me, since I would have to add a lot of accented characters.
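
If anyone does want to go that route in bulk, here is a sketch of how the "lots of characters" problem could be shrunk: Python's unicodedata module can print a readable name for every accented Latin-1 letter, which you could then paste or adapt into the symbol list rather than typing each entry by hand. The output is just text; nothing here is NVDA-specific.

    # Print a readable name for each accented Latin-1 letter.
    import unicodedata

    for code in range(0x00C0, 0x0100):  # Latin-1 Supplement
        ch = chr(code)
        if unicodedata.category(ch).startswith("L"):  # letters only
            print(ch, unicodedata.name(ch).title())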


And Gene, I've already tried the method you suggested to measure the speech rate; I forgot to mention it earlier.


---

Suhas

Sent from Thunderbird

“To avoid criticism say nothing, do nothing, be nothing.”
Elbert Hubbard

Quentin Christensen
 

Contacting Microsoft is definitely the best option for getting the OneCore voices fixed.  The Disability Answer Desk is probably the best channel for that feedback: https://www.microsoft.com/en-us/accessibility/disability-answer-desk. The Feedback Hub is another option.

Kind regards

Quentin.



--
Quentin Christensen
Training and Support Manager


Suhas D
 

Thank you! I'll pass that feedback along to Microsoft.


---
Suhas
Sent from Thunderbird

“To avoid criticism say nothing, do nothing, be nothing.”
Elbert Hubbard