Re: blank line reading by NVDA

Luke Davis
 

On Tue, 8 Oct 2019, Brian Vogel wrote:

Luke, I am not going to get into a technical argument. But it makes no sense to say that you don't process text that gets passed to a synth prior to its being passed to it. None of the dictionaries, to my knowledge, post process text after it goes to the synth, but are used to decide what, exactly, gets passed to the synth.
When did I say that anything post-processes text *after* it goes to the synth? That would be ridiculous.
What I said was:

"
What we do have available, are speech dictionaries. But with that we have a problem, because by the time the dictionaries get applied, it's far too late in the process to even see the original text. All we get to work with is a layer between the text that is going to be sent to a synth, and the text that actually arrives at a synth.
"

Since that seems unclear to you, I will try to say it another way.

1. NVDA takes the contents of the multi-line edit field (in Notepad, for example).
It sees a blank line as just an empty string.
It is at this point that your regexp matching empty lines would work.

2. NVDA does whatever internal things it needs to do to the text in question (in this case, nothing), to figure out whether it needs to report fonts, underlining, indenting, etc.

3. A bunch of other stuff that is irrelevant to the discussion at hand.

4. By now, if this were a text string like "hello world" that was indented and underlined, with font announcements turned on, the text NVDA planned to speak would probably look like this:

Arial 15 point indent 2.0 underlined Hello world

In our case, we still have empty text. Since NVDA knows the synth needs to actually say something, it won't send empty text; instead, the text will be:

blank

Either way, that is now what I called "the text that is going to be sent to a synth".

5. Apply any dictionaries that are to be applied to transform the text. These will be the default, the temporary, and, if appropriate, the synth dictionary; I have no idea in which order.

6. Send the text to be spoken, as revised by the dictionaries, to the synth. This is what I called "the text that actually arrives at a synth".

So the bottom line of what I'm telling you is that, in order for your regexp to work, the dictionaries would need to be applied all the way down at step 1 or 2. In fact they are applied very late in the process, long after the original text has been turned into a synth-friendly form; the sketch below walks through it.
Note that I am ignoring translation into other languages in this process.
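To make the ordering concrete, here is a minimal sketch of those steps in Python. Everything in it is illustrative: the function names, and the collapsing of steps 1 through 4 into a single function, are my own simplifications, not NVDA's actual internals.

import re

def prepare_for_synth(raw_line):
    # Steps 1-4: by the time the text is synth-ready, an empty line
    # has already been replaced with the word NVDA wants spoken.
    return raw_line if raw_line.strip() else "blank"

def apply_speech_dictionary(pattern, replacement, text):
    # Step 5: a dictionary entry only ever sees the prepared text.
    return re.sub(pattern, replacement, text)

raw = ""                           # the blank line as NVDA first sees it (step 1)
prepared = prepare_for_synth(raw)  # "blank": the empty string is already gone
print(apply_speech_dictionary(r"^$", "empty line", prepared))
# Prints "blank": a pattern written for empty text never matches at step 5.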

I don't even know what is referenced by "translation layer" and that may be the problem. But the final part of the process has to be handing off that which is to be spoken to the synthesizer that speaks it. All changes to same must occur prior to that step.
Obviously. But there is a big difference between applying those changes at the start of processing the incoming text and applying them right at the end of the processing chain.

At the start, the text has the nature of input. You can view it as input, and if you had a hook that early, you could manipulate the raw input. But the average user has no such hook. The only hook we have is to process the text after it has been converted into speech-style output, by running it through speech dictionaries.
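Here is a toy illustration of that difference, reusing the step 4 example from above (the strings and the regexp are mine, not anything NVDA actually does):

import re

raw_text = "Hello world"  # what a hook at the start of processing would see
speech_output = "Arial 15 point indent 2.0 underlined Hello world"  # what our hook sees

print(re.sub(r"^Hello", "Goodbye", raw_text))       # "Goodbye world"
print(re.sub(r"^Hello", "Goodbye", speech_output))  # unchanged: "Hello" is no longer at the start

A pattern written against the raw input can stop matching once the announcements have been prepended to it.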

The point I was trying to make to you was that we are not pre-processing text as it enters NVDA.
We are post-processing text as it is leaving NVDA, on its way to the synth.

Dictionaries are (very nearly, if not actually) the last step in the processing chain for text. By that point the text may bear little resemblance to the actual input; it is mainly made up of the words that NVDA wants the user to hear. We are processing the text right before it becomes audio.
By then an empty line isn't the empty string; it is the word "blank", which is what NVDA needs the user to hear.
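In concrete terms (again just an illustration, not NVDA code), the only thing a dictionary entry can usefully match for an empty line is that word:

import re

prepared = "blank"                      # what the dictionary stage sees for an empty line
print(re.search(r"^$", prepared))       # None: the empty-line pattern has nothing to match
print(re.search(r"^blank$", prepared))  # a match: only the spoken word is still visible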

Luke
