We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is literally just guessing which token – a word or fragment of a word – comes next in the sequence, based on the data it’s been trained on.
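Mechanically, that guessing is a single step repeated in a loop: the model assigns a probability to every token in its vocabulary and one is sampled. A toy sketch in Python (the vocabulary and probabilities here are invented stand-ins; a real model derives them from billions of learned weights):

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context: list[str]) -> np.ndarray:
    """Stand-in for a trained model: returns P(next token | context)."""
    # A real LLM computes these scores from learned weights; these
    # logits are hard-coded purely for illustration.
    logits = np.array([0.5, 1.2, 0.3, 2.0, 1.5, 0.1])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

context = ["the", "cat", "sat"]
for _ in range(3):
    probs = next_token_probs(context)
    # Sample the next token -- there is no "decision", only probability.
    context.append(np.random.choice(vocab, p=probs))

print(" ".join(context))
```

Every paragraph an LLM produces is built by repeating that one step.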

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with that data.

Philosopher David Chalmers calls the question of how and why physical processes give rise to conscious experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with sensory and bodily representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, which is a machine, and consciousness, which is a human phenomenon.

https://archive.ph/Fapar

  • kromem@lemmy.world · 14 hours ago

    It very much isn’t and that’s extremely technically wrong on many, many levels.

    Yet it’s still one of the most upvoted comments here.

    Which says a lot.

    • Hotzilla@sopuli.xyz · 2 hours ago

      Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

      This is far from general intelligence, but there are now solutions to some coding problems that were near impossible five years ago.

      Five years ago I would have laughed in your face if you had suggested I could write code to summarize a description typed in by a user. Now I laugh and say: give me your wallet, because I need to call an API or buy a few GPUs.
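      That summarization feature really is just a few lines now. A minimal sketch, assuming the OpenAI Python client (pip install openai); the model name and prompt are illustrative, and any LLM API would do:

      ```python
      from openai import OpenAI

      # Assumes OPENAI_API_KEY is set in the environment.
      client = OpenAI()

      def summarize(user_description: str) -> str:
          """Summarize free-form user input -- near impossible to hand-code in 2019."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=[
                  {"role": "system",
                   "content": "Summarize the user's text in two sentences."},
                  {"role": "user", "content": user_description},
              ],
          )
          return response.choices[0].message.content

      print(summarize(
          "The package arrived late and the box was damaged, but support "
          "sent a replacement within two days and added a discount code."
      ))
      ```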

    • Blue_Morpho@lemmy.world · 3 hours ago

      Given that applying a model’s weights is ultimately executed as machine code that includes conditional jumps (GPU or CPU JMP instructions), he’s not technically wrong. Of course, it’s more than just JMP, and JMP here stands in for the entire class of jump instructions like JE and JZ. Something needs to act on the results of the TMULs (tile matrix multiplies).
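      To make that concrete: a forward pass is overwhelmingly matrix multiplies, and the branch-like behaviour enters where something acts on their results, such as the activation function. A toy sketch with random stand-in weights (not a real model):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Random stand-in weights for a tiny two-layer network.
      W1 = rng.standard_normal((4, 8))
      W2 = rng.standard_normal((8, 2))

      def forward(x: np.ndarray) -> np.ndarray:
          h = x @ W1              # matrix multiply (the GEMM/TMUL part)
          h = np.maximum(h, 0.0)  # ReLU: the conditional step acting on the results
          return h @ W2           # another matrix multiply

      print(forward(rng.standard_normal(4)))
      ```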