My stomach is churning already knowing I'm about to type a short-sighted hot take related to LLMs, but I do wonder what a screen reader would look like that could provide a "summarized" version of any given web page (presumably via LLM). Basically, let the user swap between the full page rendered with the current methodology / presentation of content and links, and a version of the same page with summarized text content plus a collated, deduped section of actions found in the content.
ex.
To download W3C's editor/browser Amaya, [click here].
[Download Amaya]
[Click here] to get Amaya for Windows
All collapse into something singular and sensible like [Download Amaya installer for Windows here] as an action inside the action section.
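For what it's worth, here's a rough sketch of that "collate and dedupe" step in TypeScript. The names and URL are hypothetical, and the label-picking is a dumb heuristic; in the idea above, an LLM would replace that part with an actual summarized label like "Download Amaya installer for Windows".

```typescript
// Hypothetical sketch: collapse anchors that point at the same target into one
// deduped "action" for an actions section. Not a real screen-reader API.

interface LinkAction {
  text: string; // visible link text, e.g. "click here"
  href: string; // resolved target URL
}

function collateActions(links: LinkAction[]): LinkAction[] {
  const byTarget = new Map<string, LinkAction>();
  for (const link of links) {
    // Normalize the target so trivial variations dedupe together.
    const key = link.href.replace(/\/$/, "").toLowerCase();
    const existing = byTarget.get(key);
    // Stand-in heuristic: keep the most descriptive label seen so far.
    if (!existing || link.text.trim().length > existing.text.trim().length) {
      byTarget.set(key, { text: link.text.trim(), href: link.href });
    }
  }
  return [...byTarget.values()];
}

// The Amaya example above: three links, one real action.
// (Placeholder URL, not the actual W3C download path.)
const actions = collateActions([
  { text: "click here", href: "https://example.org/amaya/download" },
  { text: "Download Amaya", href: "https://example.org/amaya/download/" },
  { text: "Click here", href: "https://example.org/amaya/download" },
]);

console.log(actions); // => [{ text: "Download Amaya", href: "https://example.org/amaya/download/" }]
```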
I don't know. I should probably put on a sleeping mask and navigate the web via a screen reader one of these days to really experience how things are.
When we can practically run our own models that are good enough on local hardware, it'll really take off. I believe AI accelerators in end-user electronics will revolutionize how we use computers.
> I don't know. I should probably put on a sleeping mask and navigate the web via a screen reader one of these days to really experience how things are.
The difference is that it wouldn't be like experiencing it through a screen reader; it'd be like experiencing it with a screen reader that you can't use and will never be motivated enough to learn. Some blind people are known to listen to code at reading speed, which is pretty incredible.
It'd be like standing on skis for the first time, or opening Vim for the first time.