Re: The Inside Story of NVDA: what a screen reader is and is not, possibilities and constraints of screen readers #NVDA_Internals


On Mon, Sep 19, 2022 at 03:07 AM, Brian's Mail list account wrote:
I think the biggest loss, and the one most program and web site developers cannot get their heads around, is the overview issue. Yes, you can list headers, lists, links, buttons, and interactive areas, but the mental pictures we get are different to what the sighted will have.
And this is a problem that has no solution, at least if you consider the problem to be, "the mental pictures we get are different to what the sighted will have."

If you have never been able to see, even the phrase "mental pictures" is misleading, as a picture/image is literally visually based.  If you've never been able to see, you cannot form a mental picture.  You can, and do, form a mental conception, framework, understanding, or any number of other accurate things related to how material is arranged to be worked with, but none of those is accurately termed "a picture."

Even I, as a sighted person, would not claim to have any sort of mental picture were I blindfolded and presented with a webpage, for example.  I have quite a bit of knowledge about what's on it based on what the screen reader tells me, but I have no idea where it is relative to anything else or what it literally looks like.  And I certainly don't have any idea of what the thing as a whole looks like or contains.

Joseph Lee coined the term "information blackout" (which I love and have alluded to, repeatedly) to describe the difference between what a screen reader user has immediate access to and what a sighted person has access to when using a visually-based medium.  And I can assure you that almost everything related to the web has a huge visually-based component that we who are sighted take in all at one time, as a gestalt, filtering out what's irrelevant to us at the moment beneath the level of even being conscious we've done so.  You can't have that, either, with a screen reader, as the technology has no way to divine the intent of its user and it cannot make decisions about the salience of what's on a given page.

Even if AI is introduced at some point that does a very good job of dispensing with the clutter, initially anyway, you will still not be able to deal with "the whole" at one time, as there's no way to translate a fully laid-out page (web or otherwise) into verbal terms that present it all at once.  That's a "lost in translation" situation where no solution exists, because one sensory modality is being substituted for another.  Certain things are specific to vision, just as they are to hearing, taste, smell, and touch.  You just can't do a direct conversion from one to the other without "data loss."

Brian - Windows 10, 64-Bit, Version 21H2, Build 19044  

It is well to open one's mind but only as a preliminary to closing it . . . for the supreme act of judgment and selection.

       ~ Irving Babbitt
