Re: multilanguage speech one core voices
Generally, one has to tag the language being used rather than switching based solely on the alphabet. In HTML, one can define a "lang" attribute on a span or on the whole document. So, to have a piece of HTML spoken in English, the standard approach is to include
<span lang="en">Some English text</span>
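For the Arabic/English mix described in the original question, a fuller sketch might look like the following (the visible text here is just illustrative placeholder content):

```html
<!-- Sketch: a page whose default language is Arabic, with an inline
     English phrase tagged so a screen reader can switch voices -->
<html lang="ar">
  <body>
    <p>نص عربي يحتوي على <span lang="en">English words</span> في الوسط.</p>
  </body>
</html>
```

A screen reader that honors "lang" attributes can then pick the matching voice for each span, rather than guessing from the characters alone.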
In JAWS it is also possible to define a set of glyphs to be spoken in a non-default language. I don't know enough about the Dictionary feature in NVDA to say whether the same is possible in that screen reader.
From: <firstname.lastname@example.org> on behalf of amir din <mrdin8877@...>
I am using NVDA with Speech OneCore voices on Windows 10. I have installed several voices for this speech synthesizer, in different languages. I am using Arabic as my Windows default language, but sometimes I will encounter English words written with Latin letters (a, b, c, d, e). Is it possible to make the speech synthesizer switch to an English voice when it detects Latin text, and switch back to the Arabic voice when it encounters Arabic text?
If it is not available on current nvda, I think it should be added.
Sent from Mail for Windows 10