• @[email protected]
    link
    fedilink
    English
    7
    edit-2
    10 days ago

    No, it’s because it isn’t conscious. An LLM is a static model (in fact, all of our current AI models are). For something to be conscious or sapient, it would need a neural net that can morph and adapt in real time, and nothing we have can do that. Training and inference are completely separate modes. A real AGI would have to run training and inference at once, continuously.
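    The training/inference split above can be sketched with a toy one-weight "model" (purely hypothetical, nothing like a real LLM): during training the weights are mutated by update steps; once deployed, inference only reads a frozen snapshot and never changes it.

    ```python
    # Toy illustration (hypothetical, not a real LLM): training mutates the
    # weights, inference is a read-only pass over a frozen snapshot.

    def train_step(weights, x, target, lr=0.1):
        """Training mode: nudge the weight to reduce prediction error."""
        pred = weights["w"] * x
        error = pred - target
        weights["w"] -= lr * error * x   # gradient step mutates the model
        return weights

    def infer(weights, x):
        """Inference mode: use the weights without updating them."""
        return weights["w"] * x          # no learning happens here

    weights = {"w": 0.0}
    for _ in range(50):                  # "training phase"
        train_step(weights, x=2.0, target=4.0)

    frozen = dict(weights)               # deployed snapshot
    infer(frozen, 3.0)                   # answering queries...
    infer(frozen, 5.0)
    assert frozen == weights             # ...leaves the weights untouched
    ```

    The commenter’s point is that current systems only ever run the second function after deployment; a system that kept learning would have to interleave both continuously.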

    • @[email protected]
      link
      fedilink
      English
      110 days ago

      That’s fine, but I was referring to AI as a concept and not just its current iteration or implementation.

      I agree that it’s not conscious now, but someday it could be.

      • @[email protected]
        link
        fedilink
        English
        310 days ago

        That’s the same as arguing “life” is conscious, even though most life isn’t conscious or sapient.

        Someday there could be an AI that’s conscious, and when that happens we’ll call that AI conscious. That still doesn’t make all other AI conscious.

        It’s such a weirdly binary viewpoint.