AI could provide some minor deshitification of the internet by answering the obvious question implied by a clickbaity title. In other words, it would read the linked article and pop up, in the simplest terms, whatever the title is baiting you with.
For instance, a browser plugin could pop up a balloon showing “It’s Portland, Oregon” when you hover your mouse over “One US city likes its food carts more than any other”. Or “Tumbling Dice” when you hover over “The Stones’ song that Mick Jagger hates to sing”. Or even “Haggle over the price and options” for the classic clickbait “Car dealers don’t want you to know this one trick!”. All without you having to sift through pages of crap filler text (likely AI-generated) and the accompanying ads just to satisfy the trivial curiosity you were baited with.
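The hover-balloon idea above could be sketched as an extension content script. Everything here is an assumption for illustration: the `https://example.invalid/debait` endpoint, its request/response shape, and the `buildBaitQuery` helper are all hypothetical, not a real service or API.

```javascript
// Sketch of a content script for a hypothetical "de-bait" browser extension.
// The backend endpoint and response shape below are made-up placeholders.

// Build the question we would send to a summarizing backend for a hovered link.
function buildBaitQuery(headline, url) {
  return `In one short phrase, answer the question implied by this headline: "${headline}" (article: ${url})`;
}

// Wire up hover handling only when actually running inside a browser page.
if (typeof document !== "undefined") {
  document.addEventListener("mouseover", async (event) => {
    const link = event.target.closest("a");
    if (!link) return;
    // Hypothetical backend that fetches the page and extracts the answer.
    const res = await fetch("https://example.invalid/debait", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: buildBaitQuery(link.textContent, link.href) }),
    });
    const { answer } = await res.json();
    link.title = answer; // simplest possible "balloon": a native tooltip
  });
}
```

The heavy lifting (fetching the article, summarizing it) stays server-side in this sketch, so the plugin itself just sends the headline and URL and displays whatever comes back.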
I wouldn’t even mind too much if the service collected and sold the fact that I did (or didn’t) get curious about a given topic. It would still mean fewer ads in my face overall. So maybe monetizing it like that could motivate someone to build such a service?
Or would that just make the net worse?
A lot of the issue with this is that we are talking about a really energy-intensive way of solving this non-problem.
A better fix is to train humans to stop falling for the bait. That is also rather hard, though. I’m pretty sure you can already get browser plugins that identify clickbait headlines and just hide them.
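Those detect-and-hide plugins don’t need an LLM at all; cheap pattern matching goes a long way. A toy version, where the pattern list is purely illustrative and not any real plugin’s rule set:

```javascript
// Toy clickbait detector: cheap regex heuristics, no LLM required.
// These patterns are illustrative examples, not a real plugin's rules.
const BAIT_PATTERNS = [
  /this one (weird )?trick/i,
  /you won'?t believe/i,
  /don'?t want you to know/i,
  /^one \w+ (city|song|thing)/i,
  /what happened next/i,
];

function looksLikeClickbait(headline) {
  return BAIT_PATTERNS.some((re) => re.test(headline));
}
```

A plugin would then walk the page’s links and, for any matching headline, do something like `link.style.display = "none"` — zero inference cost, at the price of false positives and an ever-stale pattern list.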
If we can get the cost of reading and summarizing an article down (and get an AI that understands things like facts and source quality), then there are a bunch of things it could do for us. Interpreting contracts and ToS bollocks comes to mind, but LLMs as we have them today can’t do that. They might end up as part of the tool chain, but they are presently insufficient.