• snooggums
    233 days ago

    AI returns incorrect results.

    In this case semantics matter, because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.

    • FaceDeer
      33 days ago

      It’s not “anthropomorphic bullshit”, it’s technical jargon that you’re not understanding because you’re applying the wrong context to the definitions. AI researchers use terms like “hallucination” to mean specific AI behaviours; they use it in their scientific papers all the time.

    • thedruid
      0
      3 days ago

      No. It’s to make people who don’t understand LLMs cautious about placing their trust in them. To communicate that clearly, you need to use language that people who don’t understand LLMs can follow.

      I can’t believe this is the supposed high level of discourse on Lemmy.

      • @[email protected]
        -6
        edit-2
        2 days ago

        I can’t believe this is the supposed high level of discourse on Lemmy.

        Lemmy users and AI have a lot of things in common, like being confidently incorrect and making things up to further their point. AI at least agrees and apologises when you point out that it’s wrong; it doesn’t double down and cry to the mods to get you banned.

        • thedruid
          -2
          2 days ago

          I know. It would be a much better world if AI apologists could just admit they are wrong.

          But nah. They’re better than others.