Re: NVDA's handling of checkboxes especially in Google Chrome
Gene,
We could very easily get into the weeds about the screen reader buffer, and if you wish to pursue this privately I have no objection; it's just not really relevant as an on-group topic. But I will say that even the description you give of a webpage is not what most of them look like, by a long shot. That's the core of the problem: how a webpage is laid out visually is a primary part of its design. The web, taken as a whole, is a visual medium, and those of us who are sighted have all sorts of expectations about how something "should be laid out" that we aren't even consciously aware of, because the conventions for that arrangement cross many mediums: books, magazines, periodicals, webpages, and so on.

It has become abundantly clear to me that the HTML, as written to render that visual product, and what becomes of it in the virtual buffer are in many ways unrelated. That is not surprising in and of itself, because when you strip a medium, any medium, of a primary feature in order to make it accessible to those who cannot use that feature, it automatically becomes "a different beast." I have said, repeatedly, that all accessibility is a workaround, and no matter how good it is or gets, you will never have 100% access to information in the way that sighted individuals do. That is no fault of your own, of the accessibility developers, or of sighted people.

Joseph Lee put it very well to me once (in private correspondence, I believe, though it may have been in the Windows 10 group): sighted people take in web pages, and I'll add not just those, as a gestalt. We are taking in the totality of the thing, auto-filtering out what's generally irrelevant (for a webpage, think of all those links at the bottom that most of us have never activated in our lives), and distilling it into something where the relevance of what's presented visually is categorized by the viewer in the way that's most meaningful to them.
It's just how visual processing works, and it cannot be even approximated by a screen reader, even one with AI components. A screen reader cannot know what's relevant to you, as an individual, when you land on a webpage. It must present you with a number of things that, for most people, sighted or blind, are irrelevant clutter, simply because on rare occasion one of those bits of clutter may be exactly what you're looking for. It happens.

It's also impossible for a screen reader to have focus on multiple things literally at the same time, while with vision you can, for all practical purposes, attend to multiple things at once, because attention switching happens so rapidly and imperceptibly that you simply notice things. That's one of the reasons I use ad blockers: blinking, scrolling, pop-ups, and the like drive me to distraction, and they're used with abandon in web advertising.

In my tutorial, Mass Selection and Deletion of Gmail Messages via the Gmail Web Interface, I said this: "This is where the trick comes in because the screen reader cannot focus on two places at once. After doing the select all the first 100 messages (or whatever you've chosen as your maximum to display) are shown as selected and focus remains on the select menu. However, if you've got way more than that 100 a link is also now up on the screen that allows you to cause the remaining messages not visible to be selected as well."

There is just no way a screen reader can ever accurately imitate what occurs naturally when you're processing a screen visually as a whole (the gestalt). I instantly see that change at the top of the screen for that selection option if enough messages exist for it to appear. I don't have to think about it in almost any way, I don't have to shift focus to it in any meaningful way; I just notice that it's appeared and use it if I need to.
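For what it's worth, the mechanism web authors have for situations like that Gmail link is what HTML calls an ARIA live region: a container marked so that when its contents change, the screen reader announces the change without moving the user's focus. A minimal sketch of the idea (this is illustrative markup, not Gmail's actual code, and the element names and message text are hypothetical):

```html
<!-- A status area marked as a live region. When script inserts the
     "select all" link below, a screen reader such as NVDA announces
     the new text at the next pause, without moving the user's focus
     off the select menu. -->
<div id="selection-status" aria-live="polite"></div>

<script>
  // Hypothetical sketch: after the visible page of messages has been
  // selected, offer a link that extends the selection to everything.
  const status = document.getElementById('selection-status');
  status.innerHTML =
    'All 100 conversations on this page are selected. ' +
    '<a href="#" id="select-everything">Select all conversations</a>';
</script>
```

Even when a page does this correctly, the announcement is serial: it arrives one utterance at a time, after the fact, rather than being noticed at a glance the way a sighted user notices the link the instant it appears.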
A screen reader has to have focus somewhere, on something, and while there is functionality to handle certain very specific types of screen areas that present constantly changing status, that doesn't come close to covering all of the possible ways that "a change of significance" gets shown visually. There really are times when there is just no substitute for vision, and by that I mean how vision works to direct your attention. Designers rely on that, and they should, because the majority of the world has sight, and there is no logical reason to take arrows out of your design quiver.

There are certain aspects of it that can never be "translated" via accessibility software. You may be able to get 100% of the text content, but exactly how and when you get it as a screen reader user is entirely different from how and when a sighted person gets it, because they can see it and you can't. This is what Joseph Lee brilliantly termed "information blackout." That is a fact, not a value judgment, and I doubt that any mechanism other than artificial sight (and probably not even that, as there are aspects of all our senses that "bake in" when we have them from birth and don't just appear if we somehow get a substitute later) could produce even a rough equivalency. Lack of any sense creates certain chasms that cannot be crossed, no matter how much anyone wants to make them crossable.

--
Brian - Windows 10, 64-Bit, Version 21H1, Build 19043

The ignorance of one voter in a democracy impairs the security of all. ~ John F. Kennedy