Synth Development - How do you add a new language to NVDA?
Moderator's Note: I'm granting a dispensation on this question because there may be members who can direct the questioner to the correct resources, and I have no idea what those are. This is definitely not an NVDA question, nor a question about synth compatibility with NVDA, but one about creating a synth for a specific language that does not yet have one. That's definitely out of scope for this group. If anyone knows of the correct contacts for this kind of development, please feel free to share that information. But any deep dives into actual synth development really should take place elsewhere.
Yesterday was National Indigenous Peoples Day 2021 in Canada
David Berman commented on LinkedIn that there are no Canadian Indigenous languages available in screen readers, which made me sad.
The most commonly spoken Indigenous language in the province I am from is Mi'kmaq. While I do not speak Mi'kmaq, I've always been interested in learning. More importantly, I have access to some native speakers, as I have a cousin who teaches at a school where Mi'kmaq is commonly spoken.
Is there a cookbook somewhere on how to do this? Is anyone interested in volunteering to help?
As Brian pointed out, this is not really an NVDA issue.
You don't "add a language to NVDA" in that sense.
You add a language to a particular speech synthesizer.
Your best bet is likely to look into adding a new language to the eSpeak-NG synth, as it is the primary open source speech synth used with NVDA, and likely the one most welcoming of new languages.
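For anyone curious what that actually involves: the eSpeak-NG repository includes a guide (docs/add_language.md) describing the pieces a new language needs. As a rough sketch only — the `mic` language code for Mi'kmaq and the file names below are assumptions based on eSpeak-NG's documented layout, not verified steps — the moving parts look something like this:

```
# Pieces eSpeak-NG expects for a new language (hypothetical "mic" code
# for Mi'kmaq; see docs/add_language.md in the eSpeak-NG repository).

dictsource/mic_rules   # letter-to-sound spelling rules
dictsource/mic_list    # pronunciation exceptions and common words
phsource/ph_mic        # phoneme definitions, if existing ones don't fit

# A voice file declaring the language, containing lines such as:
#   name mikmaq
#   language mic

# After editing the dictionary files, recompile them with:
#   espeak-ng --compile=mic
```

Most of the work is linguistic rather than programming: documenting the phoneme inventory and the spelling-to-sound rules of the language, which is exactly where access to native speakers would be essential.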
I don't know that to be true, but I doubt you will get Eloquence (Code Factory, which is really just selling a product from a much larger company), or Microsoft's OneCore team, to listen to you for a language that will only ever be needed by a tiny fraction of an already small market.