The next evolution of NVDA?


Luke Robinett

As a home recording enthusiast, I use various types of music production software in conjunction with NVDA. There's a very interesting NVDA plug-in that essentially grants users access to graphical interfaces that are otherwise completely inaccessible. You run into a lot of these kinds of apps in the music world, such as software instruments and effects plug-ins. I don't fully understand how it works its magic, but it basically uses optical character recognition to figure out what's on the interface, and you can then navigate its various controls like any other app. I believe it requires specific templates to be developed for whatever software you're using it with.

It seems like this type of approach could signal the next evolution of screen reader technology. Imagine if a screen reader could use artificial intelligence and machine learning, alongside existing strategies, to grant us access to basically any interface, whether or not it followed accessibility standards. I wonder if such a thing will be possible in the near future outside of very specific use cases like the one I described. Interesting food for thought.
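
For anyone curious, here's a very rough conceptual sketch (in Python) of how OCR-driven control discovery might work in principle. To be clear, this is not the add-on's actual code; the libraries (Pillow and pytesseract) and the template labels are just assumptions I'm using for illustration.

# Conceptual sketch: locate labelled controls in a screenshot of an
# otherwise inaccessible plug-in window using OCR, then match them
# against a hand-made template so a screen reader could announce and
# focus them. Assumes Pillow, pytesseract and the Tesseract engine
# are installed; the plug-in name and labels below are made up.
from PIL import ImageGrab
import pytesseract

# Hypothetical template: control labels we expect on this plug-in's UI,
# mapped to the kind of control each one represents.
TEMPLATE = {"Cutoff": "knob", "Resonance": "knob", "Bypass": "button"}

def find_controls():
    screenshot = ImageGrab.grab()  # capture the current screen
    data = pytesseract.image_to_data(
        screenshot, output_type=pytesseract.Output.DICT
    )
    controls = []
    for i, word in enumerate(data["text"]):
        label = word.strip()
        if label in TEMPLATE:
            # Remember the label, its role from the template, and where it
            # sits on screen so it could be clicked or focused later.
            controls.append({
                "label": label,
                "role": TEMPLATE[label],
                "x": data["left"][i],
                "y": data["top"][i],
            })
    return controls

if __name__ == "__main__":
    for ctrl in find_controls():
        print(f'{ctrl["label"]} ({ctrl["role"]}) at {ctrl["x"]},{ctrl["y"]}')

In other words, the template supplies the knowledge of what controls exist, and OCR supplies where they currently are on screen; a smarter system might infer the template itself instead of requiring one per application.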
