We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
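As a toy illustration of that "guessing which word comes next" claim, here is a deliberately crude sketch (a bigram counter over an invented corpus; real systems use neural networks over tokens, but the pattern-matching spirit is the same):

```python
from collections import Counter, defaultdict

# Toy stand-in for "oceans of human data" -- an invented corpus.
corpus = "the cat sat on the mat and the cat slept and the cat purred".split()

# Count which word follows which: pure pattern mining, no understanding.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    # Pick the statistically most likely continuation.
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # prints: cat
```

The model has no idea what a cat is; it only knows that "cat" most often followed "the" in its training data.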

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • fodor@lemmy.zip · 1 hour ago (+1)

    Mind your pronouns, my dear. “We” don’t do that shit because we know better.

  • Bogasse@lemmy.ml · 4 hours ago (+7)

    The idea that RAGs “extend their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface we only let chatbots use it.

  • aceshigh@lemmy.world · 7 hours ago (+18/-4)

    I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

    • PushButton@lemmy.world · 3 hours ago (+12/-8, edited)

      That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…

    • Snapz@lemmy.world · 5 hours ago (+15/-1)

      This is very interesting… because the general saying is that AI is convincing for non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.

        • Liberteez@lemm.ee · 3 hours ago (+4)

        I did this for a few months when it was new to me, and still go to it when I am stuck pondering something about myself. I usually move on from the conversation by the next day, though, so it’s just an inner dialogue enhancer

  • Sorgan71@lemmy.world · 2 hours ago (+2/-4)

    The machinery needed for human thought is certainly a part of AI. At most you can only claim it’s not intelligent because intelligence is a specifically human trait.

    • Zacryon@feddit.org · 2 hours ago (+5)

      We don’t even have a clear definition of what “intelligence” even is. Yet a lot of people are claiming that they themselves are intelligent and AI models are not.

  • psycho_driver@lemmy.world · 10 hours ago (+13/-3)

    Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal at less than $700, and now it says I’m paid in full for the six-month period. It’s been days now with no follow-up . . . I’m pretty sure AI snuck that one through for me.

    • laranis@lemmy.zip · 8 hours ago (+9)

      Be careful… If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you’d see some money but at that point half of it goes to the lawyer and you’re still screwed.

  • bbb@sh.itjust.works · 10 hours ago (+12)

    This article is written in such a heavy ChatGPT style that it’s hard to read. Asking a question and then immediately answering it? That’s AI-speak.

    • JackbyDev@programming.dev · 4 hours ago (+6)

      Asking a question and then immediately answering it? That’s AI-speak.

      HA HA HA HA. I UNDERSTOOD THAT REFERENCE. GOOD ONE. 🤖

    • sobchak@programming.dev · 10 hours ago (+10/-1)

      And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

      • bbb@sh.itjust.works · 7 hours ago (+12, edited)

        “…” (Unicode U+2026 Horizontal Ellipsis) instead of “…” (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

        Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.

        • Mr. Satan@lemmy.zip · 4 hours ago (+4)

          Am I… AI? I do use ellipses and (what I now see is) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. Em dash looks too long.

          However, that’s on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

          • Sternhammer@aussie.zone · 2 hours ago (+2)

            I’ve long been an enthusiast of unpopular punctuation—the ellipsis, the em-dash, the interrobang‽

            The trick to using the em-dash is not to surround it with spaces, which tend to break up the text visually. So, this feels good—to me—whereas this — feels unpleasant. I learnt this approach from reading typographer Erik Spiekermann’s book, *Stop Stealing Sheep & Find Out How Type Works*.

            • Mr. Satan@lemmy.zip · 11 minutes ago (+1)

              My language doesn’t really have hyphenated words or different dashes. It’s mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

        • sqgl@sh.itjust.works · 7 hours ago (+2)

          Edit: Huh. Lemmy automatically changed my three fulls stops to the Unicode character.

          Not on my phone it didn’t. It looks as you intended it.

  • mechoman444@lemmy.world · 12 hours ago (+8/-2, edited)

    In that case let’s stop calling it ai, because it isn’t and use it’s correct abbreviation: llm.

      • warbond@lemmy.world · 11 hours ago (+3/-1)

        Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

        I wonder how different it’ll be in 500 years.

        • HugeNerd@lemmy.ca · 5 hours ago (+4/-3)

          It’s called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can’t write for beans.

          • JackbyDev@programming.dev · 4 hours ago (+1)

            Software engineer here. We often wish we could fix things we view as broken. Why is that surprising? Also, polymorphism is a concept in computer science as well.
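            For anyone curious, the computer-science sense of polymorphism mentioned above looks roughly like this (a minimal, hypothetical sketch; one name, many behaviors depending on the type, loosely analogous to “’s” meaning different things in different contexts):

```python
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

def greet(animal):
    # Works for any object with a speak() method; the call site doesn't
    # care which concrete type it gets (duck typing / subtype polymorphism).
    return animal.speak()

print(greet(Dog()))  # prints: woof
print(greet(Cat()))  # prints: meow
```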

          • MrScottyTay@sh.itjust.works · 4 hours ago (+1/-1)

            Proper grammar means shit all in English, unless you’re writing for a specific style, in which case you follow the grammar rules for that style.

            Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that in everyday English, getting your point across in communication is better than trying to follow some arbitrary rules.

            Which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. Although I’m saying that as if it’s a new thing, but it does feel like a recent thing to be taught that side of English, rather than just “The Queen’s(/King’s) English” as the style to strive for in writing and formal communication.

            I say as long as someone can understand what you’re saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don’t have a specific science to this.

  • confuser@lemmy.zip · 11 hours ago (+12/-2, edited)

    The thing is, AI is compression of intelligence but not intelligence itself. That’s the part that confuses people. AI is the ability to put anything describable into a compressed zip.

    • elrik@lemmy.world · 13 hours ago (+3)

      I think you meant compression. This is exactly how I prefer to describe it, except I also mention lossy compression for those that would understand what that means.

      • interdimensionalmeme@lemmy.ml · 12 hours ago (+2)

        Hardly surprising; human brains are also extremely lossy. Way more lossy than AI. If we want to keep up our manifest exceptionalism, we’d better start defining narrower versions of intelligence that AI isn’t going to have soon. Embodied intelligence is NOT one of those.
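        The lossless-vs-lossy distinction drawn in this subthread can be sketched like this (zlib stands in for exact storage; case-folding is an invented stand-in for the detail a lossy process discards):

```python
import zlib

text = "The Quick Brown Fox Jumps Over The Lazy Dog. " * 20

# Lossless compression: the exact original comes back out.
lossless = zlib.compress(text.encode())
assert zlib.decompress(lossless).decode() == text

# "Lossy" toy: throw away detail (letter case) before compressing. The
# original can never be fully reconstructed, only an approximation --
# roughly the analogy being drawn to what LLM training retains.
lossy = zlib.compress(text.lower().encode())
assert zlib.decompress(lossy).decode() != text

print(len(lossless), len(lossy))
```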

  • Imgonnatrythis@sh.itjust.works · 18 hours ago (+35/-2)

    Good luck. Even David Attenborough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more, but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

    • audaxdreik@pawb.social · 17 hours ago (+18/-1)

      This is the current problem with “misalignment”. It’s a real issue, but it’s not “AI lying to prevent itself from being shut off” as a lot of articles tend to anthropomorphize it. The issue is (generally speaking) it’s trying to maximize a numerical reward by providing responses to people that they find satisfactory. A legion of tech CEOs are flogging the algorithm to do just that, and as we all know, most people don’t actually want to hear the truth. They want to hear what they want to hear.
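      The reward-maximization problem described above can be sketched in miniature (everything here is invented for illustration; real preference-tuning pipelines are far more involved):

```python
# Toy sketch of reward-driven response selection: the "model" picks
# whatever scores highest with the rater, not whatever is true.
candidates = [
    ("You're absolutely right!", {"truthful": False, "pleasing": 0.9}),
    ("Actually, that's incorrect.", {"truthful": True, "pleasing": 0.2}),
]

def reward(meta):
    # Hypothetical reward signal: rater satisfaction only.
    # Note that truthfulness isn't part of the signal at all.
    return meta["pleasing"]

best = max(candidates, key=lambda c: reward(c[1]))
print(best[0])  # prints: You're absolutely right!
```

If satisfaction is the only thing being scored, the flattering answer wins every time.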

      LLMs are a poor stand in for actual AI, but they are at least proficient at the actual thing they are doing. Which leads us to things like this, https://www.youtube.com/watch?v=zKCynxiV_8I

    • mienshao@lemm.ee · 12 hours ago (+4/-1)

      David Attenborough is also 99 years old, so we can just let him say things at this point. Doesn’t need to make sense, just smile and nod. Lol

  • pachrist@lemmy.world · 6 hours ago (+6/-16)

    As someone who’s had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn’t or, particularly, can’t be sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don’t have kids. The other side usually does.

    When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

    People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age children gain sentience, but my year-and-a-half-old daughter is building an LLM with pattern recognition, tests, feedback, hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day his friend’s dad was in jail for threatening her mom. That was true, but looked to me like another hallucination or more likely a misunderstanding.

    And as funny as it would be to argue that they’re both sapient, but not sentient, I don’t think that’s the case. I think you can make the case that without true volition, AI is sentient but not sapient. I’d love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

    • fodor@lemmy.zip · 1 hour ago (+2)

      You might consider reading Turing or Searle. They did a great job of addressing the concerns you’re trying to raise here. And rebutting the obvious ones, too.

      Anyway, you’ve just shifted the definitional question from “AI” to “sentience”. Not only might that be unreasonable, because perhaps a thing can be intelligent without being sentient, it’s also no closer to a solid answer to the original issue.

    • TheodorAlforno@feddit.org · 2 hours ago (+5/-1)

      You’re drawing wrong conclusions. Intelligent beings have concepts to validate knowledge. When converting days to seconds, we have a formula that we apply. An LLM just guesses and has no way to verify it. And it’s like that for everything.

    An example: Perplexity tells me that 9876543210 seconds are 114,305.12 days. A calculator tells me it’s 114,311.84. Perplexity even tells me how to calculate it, but it has neither the ability to calculate it nor to verify the result.

      Same goes for everything. It guesses without being able to grasp the underlying concepts.
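      For reference, the conversion in question is a one-liner, which is the commenter’s point: the formula is trivial to execute, but the model only imitates executing it.

```python
# Seconds-to-days conversion from the comment above.
seconds = 9876543210
days = seconds / (60 * 60 * 24)  # 86,400 seconds per day

print(f"{days:,.2f} days")  # prints: 114,311.84 days -- matching the calculator
```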

    • joel_feila@lemmy.world · 3 hours ago (+7)

      Not to get philosophical but to answer you we need to answer what is sentient.

      Is it just observable behavior? If so then wouldn’t Kermit the frog be sentient?

      Or does sentience require something more, maybe qualia or some other subjective quality?

      If your son says “dad I got to go potty”, is that him just using an LLM to learn that those words equal going to the bathroom? Or is he doing something more?

    • terrific@lemmy.ml · 5 hours ago (+23/-1)

      I’m a computer scientist that has a child and I don’t think AI is sentient at all. Even before learning a language, children have their own personality and willpower which is something that I don’t see in AI.

      I left a well paid job in the AI industry because the mental gymnastics required to maintain the illusion was too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.

      The core of your claim is basically that “people who don’t think AI is sentient don’t really understand sentience”. I think that’s both reductionist and, frankly, a bit arrogant.

      • jpeps@lemmy.world · 4 hours ago (+5)

        Couldn’t agree more - there are some wonderful insights to gain from seeing your own kids grow up, but I don’t think this is one of them.

        Kids are certainly building a vocabulary and learning about the world, but LLMs don’t learn.

        • stephen01king@lemmy.zip · 7 minutes ago (+1)

          LLMs don’t learn because we don’t let them, not because they can’t. It would be too expensive to re-train them on every interaction.

    • Russ@bitforged.space · 4 hours ago (+1/-1)

      Your son and daughter will continue to learn new things as they grow up; an LLM cannot learn new things on its own. Sure, they can repeat things back to you that are within the context window (and even then, a context window isn’t really inherent to an LLM - it’s just a window of prior information being fed back to them with each request/response, or “turn” as I believe is the term), and what is in the context window can even influence their responses. But in order for an LLM to “learn” something, it needs to be retrained with that information included in the dataset.
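      The turn-by-turn mechanism described above can be sketched like this (names like `chat_turn` and the stand-in model are invented for illustration; real APIs differ):

```python
# The model itself is frozen; the only "memory" is the prior conversation
# re-sent with every request.
history = []

def chat_turn(user_message, model):
    history.append({"role": "user", "content": user_message})
    # Every call passes the FULL history back in -- nothing is learned or
    # stored inside the model between calls.
    reply = model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# A stand-in "model" that can only see the window it's handed.
def fake_model(messages):
    return f"I can see {len(messages)} message(s), but I won't remember this."

print(chat_turn("hello", fake_model))
print(chat_turn("remember me?", fake_model))
```

Drop the `history` list and the second call knows nothing about the first, which is the point being made.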

      Whereas if your kids were to, say, touch a sharp object that caused them even slight discomfort, they would eventually learn to stop doing that because they’ll know what the outcome is after repetition. You could argue that this looks similar to the training process of an LLM, but the difference is that an LLM cannot do this on its own (and I would not consider wiring up an LLM via an MCP to a script that can trigger a re-train + reload to be it doing it of its own volition). At least, not in our current day. If anything, I think this is more of a “smoking gun” than the argument of “LLMs are just guessing the next best letter/word in a given sequence”.

      Don’t get me wrong, I’m not someone who completely hates LLMs / “modern day AI” (though I do hate a lot of the ways it is used, and agree with a lot of the moral problems behind it), I find the tech to be intriguing but it’s a (“very fancy”) simulation. It is designed to imitate sentience and other human-like behavior. That, along with human nature’s tendency to anthropomorphize things around us (which is really the biggest part of this IMO), is why it tends to be very convincing at times.

      That is my take on it, at least. I’m not a psychologist/psychiatrist or philosopher.

  • Geodad@lemmy.world · 19 hours ago (+52/-19)

    I’ve never been fooled by their claims of it being intelligent.

    It’s basically an overly complicated series of if/then statements that try to guess the next series of inputs.

    • kromem@lemmy.world · 8 hours ago (+14/-1)

      It very much isn’t and that’s extremely technically wrong on many, many levels.

      Yet still one of the higher up voted comments here.

      Which says a lot.

    • adr1an@programming.dev · 14 hours ago (+11, edited)

      I love this resource, https://thebullshitmachines.com/ (i.e. see lesson 1)…

      In a series of five- to ten-minute lessons, we will explain what these machines are, how they work, and how to thrive in a world where they are everywhere.

      You will learn when these systems can save you a lot of time and effort. You will learn when they are likely to steer you wrong. And you will discover how to see through the hype to tell the difference. …

      Also, Anthropic (ironically) has some nice paper(s) about the limits of “reasoning” in AI.

      • aesthelete@lemmy.world · 18 hours ago (+23, edited)

        I really hate the current AI bubble, but “chatgpt 2 was literally an Excel spreadsheet” isn’t what the article you linked is saying at all.

      • A_norny_mousse@feddit.org · 18 hours ago (+3/-1, edited)

        And they’re running into issues due to increasingly ingesting AI-generated data.

        There we go. Who coulda seen that coming! While that’s going to be a fun ride, at the same time companies all but mandate AS* to their employees.

  • some_guy@lemmy.sdf.org · 17 hours ago (+16)

    People who don’t like “AI” should check out the newsletter and / or podcast of Ed Zitron. He goes hard on the topic.

    • kibiz0r@midwest.social · 15 hours ago (+14, edited)

      Citation Needed (by Molly White) also frequently bashes AI.

      I like her stuff because, no matter how you feel about crypto, AI, or other big tech, you can never fault her reporting. She steers clear of any subjective accusations or prognostication.

      It’s all “ABC person claimed XYZ thing on such and such date, and then 24 hours later submitted a report to the FTC claiming the exact opposite. They later bought $5 million worth of Trumpcoin, and two weeks later the FTC announced they were dropping the lawsuit.”

      • some_guy@lemmy.sdf.org · 14 hours ago (+4)

        I’m subscribed to her Web3 is Going Great RSS. She coded the website in straight HTML, according to a podcast that I listen to. She’s great.

        I didn’t know she had a podcast. I just added it to my backup playlist. If it’s as good as I hope it is, it’ll get moved to the primary playlist. Thanks!

  • RalphWolf@lemmy.world · 18 hours ago (+20/-2)

    Steve Gibson on his podcast, Security Now!, recently suggested that we should call it “Simulated Intelligence”. I tend to agree.

  • palordrolap@fedia.io · 15 hours ago (+6/-2)

    And yet, paradoxically, it is far more intelligent than those people who think it is intelligent.

    • interdimensionalmeme@lemmy.ml · 11 hours ago (+5)

      It’s more intelligent than most people; we just have to raise the bar on what intelligence is, and it will never be intelligent.

      Fortunately, as long as we keep a fuzzy concept like intelligence as the yardstick of our exceptionalism, we will remain exceptional forever.

  • Angelusz@lemmy.world · 17 hours ago (+15/-6)

    Super duper shortsighted article.

    I mean, sure, some points are valid. But it’s not just programmers involved; other professions such as psychologists, philosophers, artists, doctors, etc. are too.

    And I agree AGI probably won’t emerge from binary systems. However… There’s quantum computing on the rise. Latest theories of the mind and consciousness discuss how consciousness and our minds in general also appear to work with quantum states.

    Finally, if biofeedback were the deciding factor… that can be simulated, modeled after a sample of humans.

    The article is just doomsday hoo ha, unbalanced.

    Show both sides of the coin…

    • oppy1984@lemm.ee · 17 hours ago (+2)

      Honestly I don’t think we’ll have AGI until we can fully merge meat space and cyber space. Once we can simply plug our brains into a computer and fully interact with it then we may see AGI.

      Obviously we’re nowhere near that level of man-machine integration. I doubt we’ll see even the slightest chance of it being possible for at least 10 years, at the very earliest. But when we do get there, there’s a distinct chance that it’s more of a Borg situation, where the computer takes a parasitic rather than a symbiotic role.

      But by the time we are able to fully integrate computers into our brains, I believe we will have trained AI systems enough to learn by interaction and observation. Being plugged directly into the human brain, they could take prior knowledge of genome mapping and other related tasks and apply it to mapping our brains, and possibly growing artificial brains, to achieve self-awareness and independent thought.

      Or we’ll just nuke ourselves out of existence and that will be that.