• @[email protected]
    5
    1 year ago

    maybe the whole damn thing is outsourced to ChatGPT now, who the fuck knows.

    I don’t understand why so many people assume an LLM would make glaring errors like this…

    • @[email protected]
      14
      1 year ago

      …because they frequently do? Glaring errors are like, the main thing LLMs produce besides hype.

      • @[email protected]
        28
        edit-2
        1 year ago

        They make glaring errors in logic, and confidently state things that are not true. But their whole “deal” is writing proper sentences based on predictive models. They don’t make mistakes like the excerpt highlighted.

        • @[email protected]
          4
          1 year ago

          Y’know what, that’s a fair point. Though I’m not the original commenter from the top, heh.

        • @[email protected]
          2
          1 year ago

          I’m imagining the first output didn’t cover everything they wanted, so they tweaked it, pasted the results together, and fucked it up.

        • @[email protected]
          1
          1 year ago

          Pretty soon glaring errors like this will be the only way to identify human vs LLM writing.

          Then soon after that the LLMs will start producing glaring grammatical errors to match the humans.

      • Zammy95
        2
        1 year ago

        I think he was being sarcastic lol. I…hope