On Tue, 8 Oct 2019, Luke Davis wrote:
At the start, the text has the nature of input: you can view it as input, and if you had a hook that early, you could manipulate the raw input. But the average user has no such hook. The only hook we have is to process the text after it has been converted into speech-style output, by running it through the speech dictionaries.

Maybe a more real-world example will help:
On the NVDAHelp list, someone recently asked how to get NVDA to stop saying "edit multi line" when he went into Notepad or similar.
I.e., currently, when you open Notepad, it says:
Untitled - Notepad. Text editor edit multi line. Blank.
He found this too verbose.
In a debug level log (could have been done with input-output logging as well), we can see the following (abbreviated):
Speaking [LangChangeCommand ('en_GB'), u'Untitled - Notepad']
Speaking [LangChangeCommand ('en_GB'), u'Text Editor edit multi line']
Speaking [LangChangeCommand ('en_GB'), u'blank']
The solution to his problem was to create a dictionary entry containing " edit multi line" with no replacement.
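To make the mechanics concrete, here is a minimal Python sketch of what such a dictionary entry does. This is not NVDA's actual implementation (real speech dictionary entries also support regular expressions and case-sensitivity options); the apply_dictionary helper is hypothetical, and it simply runs a plain substitution over the strings NVDA is about to speak:

```python
# Strings NVDA is about to speak, taken from the log above.
spoken = [
    "Untitled - Notepad",
    "Text Editor edit multi line",
    "blank",
]

# One dictionary entry: pattern " edit multi line", empty replacement.
entries = [(" edit multi line", "")]

def apply_dictionary(text, entries):
    """Apply each (pattern, replacement) pair as a plain text substitution.

    Hypothetical helper for illustration only; NVDA's real dictionary
    processing is more elaborate.
    """
    for pattern, replacement in entries:
        text = text.replace(pattern, replacement)
    return text

for s in spoken:
    print(apply_dictionary(s, entries))
# "Text Editor edit multi line" is spoken as just "Text Editor";
# the other two strings pass through unchanged.
```

The key point is that the substitution runs over the strings headed to the synthesizer, not over anything read from the screen.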
But most of that text isn't on the screen at all (except, probably, "Untitled - Notepad").
The strings that are going to be sent to the synth, and thus the only strings the speech dictionaries get to operate on, are the strings that NVDA wants to SAY, not the strings that it actually got from the screen. Yes, screen strings will probably be incorporated in some way, but by the time the dictionaries get them, they are strings to be verbalized, and thus may look very different from how they actually appear on screen.
That's what I meant by saying that the dictionaries are processing output, not processing input.
I hope that clarifies what I've been trying to explain.