Re: multilanguage speech one core voices


Jonathan COHN
 

Generally one has to tag the language being used rather than switch based on the alphabet alone. In HTML one can set a "lang" attribute on a span or on the whole document. So, to have a passage of HTML spoken in English, the standard approach is to include

<span lang="en">Some English text</span>
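As a fuller illustration, here is a minimal sketch of a bilingual page. It assumes (as in your setup) that the document's default language is Arabic, with one English phrase tagged; a screen reader that supports automatic language switching can then select the matching voice for each tagged span, provided the synthesizer offers voices for both languages:

```html
<!-- Hypothetical page: Arabic is the document default, English is tagged inline -->
<!DOCTYPE html>
<html lang="ar" dir="rtl">
  <body>
    <p>
      <!-- Arabic text ("hello"), spoken with the default Arabic voice -->
      مرحبا
      <!-- English span, spoken with an English voice if language switching is on -->
      <span lang="en" dir="ltr">an English phrase</span>
    </p>
  </body>
</html>
```

Note that the "dir" attribute only controls visual text direction; it is the "lang" attribute that tells the screen reader which voice to use.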

 

In JAWS it is also possible to define a set of glyphs to be spoken in a non-default language. I don't know enough about the Dictionary feature in NVDA to say whether that is possible in that screen reader too.

 

HTH,

 

Jonathan Cohn

 

 

From: <nvda@nvda.groups.io> on behalf of amir din <mrdin8877@...>
Reply-To: "nvda@nvda.groups.io" <nvda@nvda.groups.io>
Date: Friday, April 27, 2018 at 11:00 PM
To: "nvda@nvda.groups.io" <nvda@nvda.groups.io>
Subject: [nvda] multilanguage speech one core voices

 

Hi,

I am using NVDA with the Speech OneCore voices on Windows 10. I have installed several voices for this speech synthesizer, in different languages. I am using Arabic as my Windows default language, but sometimes I will encounter English words written in Latin letters (a, b, c, d, e). Is there a way to make the speech synthesizer switch to the English voice when it detects Latin letters, and switch back to the Arabic voice when it encounters Arabic text?

 

If it is not available in the current NVDA, I think it should be added.

 
Sent from Mail for Windows 10

 
