AI could provide some minor deshitification of the internet by answering the obvious questions implied by clickbaity titles. In other words, comb the linked article and pop up, in the simplest terms, whatever the title baits you with.

For instance, a browser plugin that could pop up a balloon showing “It’s Portland, Oregon” when you hover your mouse over “One US city likes its food carts more than any other”. Or “Tumbling Dice” when you hover over “The Stones’ song that Mick Jagger hates to sing”. Or even “Haggle over the price and options” on the classic clickbait “Car dealers don’t want you to know this one trick!”. All without you having to sift through pages of crap filler text (likely AI generated) and the accompanying ads to satisfy whatever trivial curiosity you were baited with.
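As a very rough sketch of how such a plugin’s hover handler might work (everything here, the function name and the prompt wording alike, is hypothetical, not any real API):

```typescript
// Hypothetical sketch: turn a clickbait headline plus the fetched
// article text into a prompt for whatever LLM backend the plugin
// uses. The extension would send this to its model and show the
// one-line reply in the hover balloon.
function buildSpoilerPrompt(headline: string, articleText: string): string {
  return [
    "The headline below withholds its answer as clickbait.",
    `Headline: ${headline}`,
    `Article: ${articleText}`,
    "Reply with only the withheld fact, in ten words or fewer.",
  ].join("\n");
}
```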

I wouldn’t even mind too much if the service collected and sold the fact that I did (or didn’t) get curious about the related topics. It would still be fewer ads in the face overall. So maybe monetizing like that could motivate someone to develop a service?

Or would that just make the net worse?

  • @[email protected]
    link
    fedilink
    621 days ago

    That’s a great idea! I’d love it if there were a way to do it without giving traffic to the clickbait site.

    • @[email protected]
      link
      fedilink
      English
      221 days ago

      If anything, it might be better than not giving them traffic in the first place, because their ad click-through rate will be zero.

  • @[email protected]
    link
    fedilink
    421 days ago

    Seems like it would just feed the arms race of enshittification. It’ll help for a while until new, smarter weapons arise against it, and we’re back to the same place while burning more electrons. I don’t see how it sustainably improves things.

    Especially since AI search summaries are still so bad. All too often the AI result is wrong, misguided, or hallucinating. Maybe I just have to get better at phrasing things, which used to be critical when search was still search, but isn’t the intent that you shouldn’t have to?

    I do use AI all the time and do think it’s useful, but only when keeping its limitations in mind. It can work well as a helpful step toward a lot of things, but rarely as a final, useful answer or result.

  • Rayquetzalcoatl · 3 points · 21 days ago

    I still don’t feel like we could trust it not to hallucinate, or not to fail to understand the article properly.

    • @[email protected]
      link
      fedilink
      121 days ago

      Trust in journalistic integrity is gone anyway, when everything’s an opinion piece rather than an actual news report.

      My favourite is reading an article on a subject you’re an expert in and evaluating its (in)accuracy, then entirely forgetting about that when reading any others.

      It’s all click bait all the way down.

    • @[email protected]
      link
      fedilink
      021 days ago

      Does it matter? You don’t want hallucinations to screw up understanding of a scholarly or technical article, or anywhere it’s critical to be right. But surfing the internet? “News” and opinion? Entertainment? What’s the likelihood of any misunderstanding having an actual impact, and can it be any worse than today’s clickbait headlines?

      • Rayquetzalcoatl · 2 points · 21 days ago

        If I Google “BBC News” and then mouse over a headline in search results about some world event, I think it is important that the AI isn’t just hallucinating and making up a summary out of whole cloth.

        We’ve already seen AI summaries, really; Google has them in search results. We’ve seen them advise using glue on pizza, for instance.

        Funny enough, I just saw a bunch of headlines about Apple and their AI that summarises news headlines. Lots of unhappy customers, because it doesn’t work.

  • peto (he/him) · 2 points · 21 days ago

    A lot of the issue with this is that we are talking about a really energy-intensive way of solving this non-problem.

    A better way is to train humans to stop falling for the bait. That is also rather hard, though. But I’m pretty sure you can already get browser plugins that identify clickbait headlines and just hide them.
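
The kind of plugin described above could be as simple as a pattern match; this is a hypothetical heuristic, not any particular extension’s code:

```typescript
// Hypothetical heuristic: flag headlines matching common clickbait
// phrasings so a page script can hide or dim the matching links.
const CLICKBAIT_PATTERNS: RegExp[] = [
  /this one (weird )?trick/i,
  /you won['’]t believe/i,
  /don['’]t want you to know/i,
  /what happened next/i,
];

function looksLikeClickbait(headline: string): boolean {
  return CLICKBAIT_PATTERNS.some((p) => p.test(headline));
}
```

A real plugin would need a much larger pattern list (or a trained classifier), but the hide-on-match mechanism is the same.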

    If we can get the cost to read and summarize an article down (and get an AI that understands things like facts and source quality), then there are a bunch of things it could do for us. Interpreting contracts and ToS bollocks comes to mind, but LLMs as we have them today can’t do that. They might end up part of the tool chain, but they are presently insufficient.