• @[email protected]
    82
    11 days ago

    The LLM peddlers seem to be going for that exact result. That’s why they’re calling it “AI”. Why is this surprising that non-technical people are falling for it?

    • @[email protected]
      6
      edit-2
      10 days ago

      That’s why they’re calling it “AI”.

      That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.

      Edit: look at this diagram if you’re still unsure:

      • @[email protected]
        13
        10 days ago

        In the general population it does. Most people are not using an academic definition of AI, they are using a definition formed from popular science fiction.

        • @[email protected]
          4
          10 days ago

          You have that backwards. People are using the colloquial definition of AI.

          “Intelligence” is defined by a group of capabilities like pattern recognition, the ability to use tools, problem solving, etc. If even one of those criteria is met, the thing in question can be said to have intelligence.

          A flat worm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition) just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.

        • @[email protected]
          3
          edit-2
          10 days ago

          Yes, that’s the point. You’d think they could have, at least, looked into a dictionary at some point in the last 2 years. But nope, everyone else is wrong. A round of applause for the paragons of human intelligence.

      • @[email protected]
        8
        10 days ago

        The I implies intelligence, of which there is none, because it’s not sentient. It’s intentionally deceptive, because it’s used as a marketing buzzword.

        • @[email protected]
          3
          10 days ago

          You might want to look up the definition of intelligence then.

          By literal definition, a flat worm has intelligence. It just doesn’t have much of it. You’re using the colloquial definition of intelligence, which uses human intelligence as a baseline.

          I’ll leave this graphic here to help you visualize what I mean:

          • FippleStone
            8
            10 days ago

            Please do post this graphic again, I don’t think I’ve quite grasped it yet

          • Nikelui
            0
            10 days ago

            Oh, yes. I forgot that LLM have creativity, abstract thinking and understanding. Thanks for the reminder. /s

            • @[email protected]
              1
              10 days ago

              It’s not a requirement to have all those things. Having just one is enough to meet the definition. Such as problem solving, which LLMs are capable of doing.

      • @[email protected]
        0
        7 days ago

        What is this nonsense Euler diagram? Emotion can intersect with consciousness, but emotion is also a subset of consciousness, yet consciousness never fully contains emotion? Intelligence doesn’t overlap at all with sentience, sapience, or emotion? Intelligence isn’t related at all to thought, knowledge, or judgement?

        Did AI generate this?

          • @[email protected]
            1
            6 days ago

            Not everything you see in a paper is automatically science, and not every person involved is a scientist.

            That picture is a diagram, not science. It was made by a writer, specifically a columnist for Medium.com, not a scientist. It was cited by a professor who, judging by his bio, is probably not a scientist either. You would know this if you had followed the citation trail of the article you posted.

            You’re citing an image from a pop culture blog and are calling it science, which suggests you don’t actually know what you’re posting, you just found some diagram that you thought looked good despite some pretty glaring flaws and are repeatedly posting it as if it’s gospel.

            • @[email protected]
              1
              5 days ago

              You’re citing an image from a pop culture blog and are calling it science

              I was being deliberately facetious. You can find similar diagrams in various studies. Granted, many of them examine modern AI models to ask questions about intelligence, reasoning, etc., but that just highlights that it’s still an open question. There’s no definitive ground truth about what exactly “intelligence” is, but most experts on the subject would largely agree with the gist of the diagram, maybe with a few notes and adjustments of their own.

              To be clear, I’ve worked in the field of AI for almost a decade and have a fairly in-depth perspective on the subject. Ultimately the word “intelligence” is completely accurate.

      • @[email protected]
        -8
        edit-2
        10 days ago

        I’m not gonna lie, most people like you are afraid to entertain the idea of AI being conscious because it makes you look at your own consciousness as not being all that special or unique.

        Do you believe in spirits, souls, or god genes?

        • @[email protected]
          7
          edit-2
          10 days ago

          No, it’s because it isn’t conscious. An LLM is a static model (as all our current AI models are, in fact). For something to be conscious or sapient, it would require a neural net that can morph and adapt in real time, and nothing can currently do that. Training and inference are completely separate modes; a real AGI would have to run the training and inference steps at once, continuously.
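          The static-model point above can be sketched in a few lines of Python. `TinyModel` is a hypothetical toy, not any real LLM's API; it just illustrates that inference reads the learned weights without ever writing them, while training is a separate mode that does mutate them:

```python
# Toy illustration (hypothetical model, not a real LLM): a trained model
# is a frozen set of weights. Inference only reads them; a separate
# training step is the only thing that updates them.

class TinyModel:
    def __init__(self, weights):
        self.weights = list(weights)  # learned parameters, fixed after training

    def infer(self, x):
        # Inference mode: uses the weights but never modifies them.
        return sum(w * xi for w, xi in zip(self.weights, x))

    def train_step(self, x, target, lr=0.1):
        # Training mode: gradient-style update that does change the weights.
        error = self.infer(x) - target
        for i, xi in enumerate(x):
            self.weights[i] -= lr * error * xi

model = TinyModel([0.5, -0.2])
before = list(model.weights)

model.infer([1.0, 2.0])           # any number of inferences...
assert model.weights == before    # ...leaves the weights untouched

model.train_step([1.0, 2.0], target=1.0)
assert model.weights != before    # only an explicit training step changes them
```

          A deployed LLM only ever runs the read-only `infer`-style path; the comment's point is that nothing like `train_step` is happening while it answers you.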

          • @[email protected]
            1
            10 days ago

            That’s fine, but I was referring to AI as a concept and not just its current iteration or implementation.

            I agree that it’s not conscious now, but someday it could be.

            • @[email protected]
              3
              10 days ago

              That’s the same as arguing “life” is conscious, even though most life isn’t conscious or sapient.

              Some day there could be AI that’s conscious, and when it happens we will call that AI conscious. That still doesn’t make all other AI conscious.

              It’s such a weirdly binary viewpoint.

    • sunzu2
      5
      11 days ago

      You don’t have to be a tech person to see through bullshit. Any person with mid-level expertise can test the limits of current LLM capabilities: they can’t provide consistently, objectively correct outputs. They’re still a useful tool, though.

        • sunzu2
          1
          11 days ago

          Education was always garbage though. It is designed to generate obedient wage slaves. Any person who wanted to get good always knew that self-study is the only way to level up.

          Your coworkers have no incentive to train you. This has been the case since at least the 1990s. It’s just how corpos operate.

          The point I am making is that none of this is new or specific to Gen Z.

          I guess covid is unique to them tho, but covid didn’t make education shite, it just exposed it imho