I found the article in a post on the fediverse, and I can’t find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and were able to observe how it worked internally: it arrived at the answer by finding similar paths, not by anything resembling mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was a detailed, step-by-step mathematical explanation, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don’t “know” how they work

  • the second answer was just a rephrasing of text from the training data that explains how math works, so the LLM simply used that as an explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher@lemmy.world, it’s this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I’m aware LLMs don’t “know” anything and don’t reason, and that’s exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

  • KeenFlame@feddit.nu · 1 day ago

    It’s the Anthropic article you are looking for, where they performed the equivalent of open brain surgery and found that the models do maths through strange and eerily humanlike operations: they estimate, and then if it goes over, they calculate the last digit separately, like I do. It sucks as a counting technique though.

  • BodilessGaze@sh.itjust.works · 2 days ago

    I don’t know how I work. I couldn’t tell you much about neuroscience beyond “neurons are linked together and somehow that creates thoughts”. And even when it comes to complex thoughts, I sometimes can’t explain how I arrived at them. At my job, I often lean on intuition I’ve developed over a decade. I can look at a system and get an immediate sense of whether it’s going to work well, but actually explaining why or why not takes a lot more time and energy. Am I an LLM?

    • Voldemort@lemmy.world · 2 days ago

      I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do. Even if we analysed the human brain, it would look like wires connected to wires, with different resistances all over the place and some other chemical influences.

      I think everyone forgets that neural networks were adopted in AI to replicate how animal brains work, and clearly, if it worked for us to get smart, then it should work for something synthetic. Well, we’ve certainly answered that now.

      Everyone saying “oh, it’s just a predictive model, it’s all math, and math can’t be intelligent” is questioning exactly how their own brains work. We are just prediction machines: the brain releases dopamine when it correctly predicts things, and it learns from correctly guessing how things work. We modelled AI off of ourselves. And if we don’t understand how we work, of course we’re not gonna understand how it works.

      • Excrubulent@slrpnk.net · 3 hours ago

        You’re definitely overselling how AI works and underselling how human brains work here, but there is a kernel of truth to what you’re saying.

        Neural networks are a biomimicry technology. They explicitly work by mimicking how our own neurons work, and surprise surprise, they create eerily humanlike responses.

        The thing is, LLMs don’t have anything close to reasoning the way human brains reason. We are actually capable of understanding and creating meaning, LLMs are not.

        So how are they human-like? Our brains are made up of many subsystems, each doing extremely focussed, specific tasks.

        We have so many, including sound recognition, speech recognition, language recognition. Then on the flipside we have language planning, then speech planning and motor centres dedicated to creating the speech sounds we’ve planned to make. The first three get sound into your brain and turn it into ideas, the last three take ideas and turn them into speech.

        We have made neural network versions of each of these systems, and even tied them together. An LLM is analogous to our brain’s language planning centre. That’s the part that decides how to put words in sequence.

        That’s why LLMs sound like us, they sequence words in a very similar way.

        However, each of these subsystems in our brains can loop-back on themselves to check the output. I can get my language planner to say “mary sat on the hill”, then loop that through my language recognition centre to see how my conscious brain likes it. My consciousness might notice that “the hill” is wrong, and request new words until it gets “a hill” which it believes is more fitting. It might even notice that “mary” is the wrong name, and look for others, it might cycle through martha, marge, maths, maple, may, yes, that one. Okay, “may sat on a hill”, then send that to the speech planning centres to eventually come out of my mouth.

        Your brain does this so much you generally don’t notice it happening.
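
        Purely as a toy analogy of that propose-and-check loop, here’s a little Python sketch. Everything in it (the candidate list, the “sounds right” rule) is made up for illustration; it is not how brains or LLMs are actually implemented.

          # Toy analogy of "generate a line, loop it back through a checker, revise".
          # The candidates and the acceptance rule are invented for illustration only.
          candidates = ["mary", "martha", "marge", "maths", "maple", "may"]

          def sounds_right(name: str) -> bool:
              # Stand-in for the "language recognition" pass: accept only short names.
              return name.isalpha() and len(name) <= 3

          line = None
          for name in candidates:        # cycle through alternatives...
              if sounds_right(name):     # ...until the checker is satisfied
                  line = f"{name} sat on a hill"
                  break

          print(line)  # -> "may sat on a hill"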

        In the 80s there was a craze around so-called “automatic writing”, which was essentially zoning out and just writing whatever popped into your head without editing. You’d get fragments of ideas and really strange things, often very emotionally charged; they seemed like they were coming from some mysterious place, maybe ghosts, demons, past lives, who knows? It was just our internal LLM being given free rein, but people got spooked into believing it was a real person, just like people think LLMs are people today.

        In reality we have no idea how to even start constructing a consciousness. It’s such a complex task and requires so much more linking and understanding than just a probabilistic connection between words. I wouldn’t be surprised if we were more than a century away from AGI.

      • ipkpjersi@lemmy.ml · 1 day ago

        I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do.

        I don’t think this is a fair way of summarizing it. You’re making it sound like we have AGI, which we do not, and may never have.

        • Voldemort@lemmy.world · 17 hours ago

          Let’s get something straight: no, I’m not saying we have AGI by the modern definition, but we’ve practically got it by the original definition, coined before LLMs were a thing, which was that the proposed AGI agent should maximise “the ability to satisfy goals in a wide range of environments”. I personally think we’ve just moved the goalposts a bit.

          Whether we’ll ever have thinking, rational and possibly conscious AGI is another question entirely. But I do think current AI is similar to existing brains.

          Do you not agree that animal brains are just prediction machines?

          That we have our own hallucinations all the time? Think visual tricks, lapses in memory, deja vu, or just the many mental disorders people can have.

          Do you think our brain doesn’t follow the path of least resistance in processing? Or do you think our thoughts come from elsewhere?

          I seriously don’t think animal brains, or human brains specifically, are so special that neural networks are beneath them. Sure, people didn’t like being likened to animals, but it was the truth, and I, as do many AI researchers, liken us to AI.

          AI is primitive now, yet it can still pass the bar exam and doctors’ exams, work through complex physics problems and write a book (soulless as it may be, like some authors’) in a matter of seconds.

          Whilst we may not have AGI, the question was about maths. The paper examined how it did 36+59: it went about it in an interesting way, half-predicting what the tens column would be while ‘knowing’ what the units column was, then putting the two together. Although that’s not how I, or even you, may do it, there are probably people who do it similarly.
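
          As a rough sketch of that two-path idea in Python (the function name is mine, and this is just ordinary column addition dressed up to show a “rough tens” path and an “exact units” path being combined; the circuit Anthropic actually traced is fuzzier than this):

            def add_like_the_traced_model(a: int, b: int) -> int:
                # Rough path: what the tens are "probably" going to be (3 + 5 for 36 + 59).
                rough_tens = a // 10 + b // 10
                # Exact path: the units digit is known precisely (6 + 9 = 15 -> units digit 5).
                units_total = a % 10 + b % 10
                exact_units = units_total % 10
                carried = units_total >= 10      # the rough guess was one ten short
                # Put the two paths together.
                return (rough_tens + carried) * 10 + exact_units

            print(add_like_the_traced_model(36, 59))  # 95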

          All I argue is that AI works similarly to how our brains think, and with our brains being irrational quite often, it shouldn’t be surprising that neural networks are also irrational at times.

          • ipkpjersi@lemmy.ml · 16 hours ago

            “the ability to satisfy goals in a wide range of environments”

            That was not the definition of AGI even back before LLMs were a thing.

            Whether we’ll ever have thinking, rational and possibly conscious AGI is another question entirely. But I do think current AI is similar to existing brains.

            That’s doing a disservice to AGI.

            Do you not agree that animal brains are just prediction machines?

            That’s doing a disservice to human brains. Humans are sentient, LLMs are not sentient.

            I don’t really agree with you.

            LLMs are damn impressive, but they are very clearly not AGI, and I think that’s always worth pointing out.

            • Voldemort@lemmy.world · 13 hours ago

              The first person recorded talking about AGI was Mark Gubrud. He made the quote above; here’s another:

              The major theme of the book was to develop a mathematical foundation of artificial intelligence. This is not an easy task since intelligence has many (often ill-defined) faces. More specifically, our goal was to develop a theory for rational agents acting optimally in any environment. Thereby we touched various scientific areas, including reinforcement learning, algorithmic information theory, Kolmogorov complexity, computational complexity theory, information theory and statistics, Solomonoff induction, Levin search, sequential decision theory, adaptive control theory, and many more. (Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, §8.1.1, p. 232)

              As UGI largely encompasses AGI, we could easily argue that if modern LLMs are beginning to fit the description of UGI, then they’re fulfilling AGI too. Although AGI’s definition in more recent times has become more nuanced, shifting towards replicating a human brain, I’d argue that trying to replicate biology would degrade the AI.

              I don’t believe it’s a disservice to AGI, because AGI’s goal is to create machines with human-level intelligence. But current AI is set to surpass collective human intelligence, supposedly by the end of the decade.

              And it’s not a disservice to biological brains to summarise them as prediction machines. They work, very clearly. Sentience or not, if you simulated every atom in the brain, it would likely do the same job, soul or no soul. It just raises the philosophical questions of “do we have free will or not?” and “is physics deterministic or not?”. So much text exists on the brain being a prediction machine, and the only time that has recently been debated is when someone tries to distinguish us from AI.

              I don’t believe LLMs are AGI yet either; I think we’re very far away from AGI. In a lot of ways I suspect we’ll skip AGI and go for UGI instead. My firm opinion is that biological brains are just not effective enough. Our brains developed to survive the natural world, and I don’t think AI needs that to surpass us. I think UGI will be the equivalent of our intelligence with the fat cut off. I believe it only resembles our irrational thought patterns now because the fat hasn’t been stripped yet, but if something truly intelligent emerges, we’ll probably see these irrational patterns cease to exist.

      • patatahooligan@lemmy.world · 1 day ago

        They work the exact same way we do.

        Two things being difficult to understand does not mean that they are the exact same.

        • Voldemort@lemmy.world · 17 hours ago

          Maybe “work” is the wrong word; same output. Just as a belt drive and a chain drive do the same thing, or how fluorescent, incandescent and LED lights all produce light even though they’re completely different mechanisms.

          What I was saying is that one is based on the other, so similar problems, like irrational thought even when the right answer is conjured up, shouldn’t be surprising. Although an animal brain and a neural network are not the same, the broad concept of how they work is.

          • futatorius@lemm.ee · 4 hours ago

            What I was saying is that one is based on the other

            Not in any direct way, no. At least not in any way more rigorous than handwavey analogies.

      • Saleh@feddit.org · 1 day ago

        Among other things, LLMs lack the brain’s “live” neurotransmitter regulation and its plasticity.

        We are nowhere near a close representation of actual brains. LLMs are to brains what a horse carriage is to a modern car: yes, both have four wheels and move, but that is far from making them close to each other.

      • lgsp@feddit.it (OP) · 1 day ago

        Even if LLM “neurons” and their interconnections are modelled on biological ones, LLMs aren’t modelled on the human brain, much of which is still not understood.

        The first difference is that the neurons are organized in a completely different way: think about the cortex versus the transformer.

        The second is the learning process: nowhere close.

        The point explained in the article about how we do math, through logical steps, while LLMs go by resemblance, is a small but meaningful example. And it also shows that you can see how LLMs work; it’s just very difficult.

        • BodilessGaze@sh.itjust.works · 1 day ago

          I agree, but I’m not sure it matters when it comes to the big questions, like “what separates us from the LLMs?” Answering that basically amounts to answering “what does it mean to be human?”, which has been stumping philosophers for millennia.

          It’s true that artificial neurons are significantly different from biological ones, but are biological neurons what make us human? I’d argue no. Animals have neurons, so are they human? Also, if we ever did create a brain simulation that perfectly replicated someone’s brain down to the cellular level, and that simulation behaved exactly like the original, I would characterize that as a human.

          It’s also true LLMs can’t learn, but there are plenty of people with anterograde amnesia that can’t either.

          This feels similar to the debates about what separates us from other animal species. It used to be thought that humans were qualitatively different than other species by virtue of our use of tools, language, and culture. Then it was discovered that plenty of other animals use tools, have language, and something resembling a culture. These discoveries were ridiculed by many throughout the 20th century, even by scientists, because they wanted to keep believing humans are special in some qualitative way. I see the same thing happening with LLMs.

  • glizzyguzzler@lemmy.blahaj.zone · 2 days ago

    Can’t help myself, so here’s a rant on people asking LLMs to “explain their reasoning”, which is impossible because they can never reason (not meant as an attack on OP, just on the “LLMs think and reason” people and companies that spout it):

    LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.
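
    For what “matrix math to complete the most likely next word” looks like at the final step, here’s a toy numpy sketch. The vocabulary, sizes and weights are all made up; real models are just much bigger stacks of the same kind of operations.

      import numpy as np

      vocab = ["the", "cat", "sat", "on", "mat"]
      rng = np.random.default_rng(0)

      hidden = rng.normal(size=4)               # state after reading the prompt so far
      W_out = rng.normal(size=(4, len(vocab)))  # learned output projection

      logits = hidden @ W_out                        # one matrix multiplication...
      probs = np.exp(logits) / np.exp(logits).sum()  # ...softmax into word probabilities
      print(vocab[int(np.argmax(probs))])            # the single most likely next word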

    Anything you read or hear about LLMs or “AI” being “asked questions”, “explaining their reasoning”, or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do, however much people wish they could.

    In this case it sounds like people who don’t understand how LLMs work are eating that propaganda up and approaching LLMs as if there’s something to talk to or discern from.

    If you waste egregious amounts of energy putting everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.

    It’d be impressive if the environmental toll of making the matrices and using them weren’t so critically bad.

    TL;DR: LLMs can never think or reason; anyone talking about them thinking or reasoning is bullshitting. They use almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic-bitch shit, because that’s the most likely next word, at the cost of environmental disaster.

    • WolfLink@sh.itjust.works · 23 hours ago

      The environmental toll doesn’t have to be that bad. You can get decent results from a single high-end gaming GPU.

    • Treczoks@lemmy.world · 2 days ago

      I’ve read that article. They used something they called an “MRI for AIs” and checked, for example, how an AI handled math questions, then asked the AI how it came to that answer, and the pathways actually differed. While the AI talked about using a textbook method, it actually took a different approach. That’s what I remember of that article.

      But yes, it exists, and it is science, not TikTok.

      • glizzyguzzler@lemmy.blahaj.zone · 2 days ago

        You can prove it’s not by doing some matrix multiplication and seeing that it’s matrix multiplication. Much easier way to go about it.

        • whaleross@lemmy.world · 1 day ago

          People that cannot do matrix multiplication do not possess the basic concepts of intelligence now? Or is software that can do matrix multiplication intelligent?

          • futatorius@lemm.ee · 4 hours ago

            People that cannot do matrix multiplication do not possess the basic concepts of intelligence now?

            As a mathematician (at least by education), I think that’s a great definition, yes.

        • theunknownmuncher@lemmy.world · 2 days ago

          Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn’t post a relevant or complete thought

          Your comment is like saying an audio file isn’t really music because it’s just a series of numbers.

          • glizzyguzzler@lemmy.blahaj.zone · 1 day ago

            Improper comparison; an audio file isn’t the basic action on data, it is the data; the audio codec is the basic action on the data

            “An LLM model isn’t really an LLM because it’s just a series of numbers”

            But the action of turning the series of numbers into something of value (the audio codec for an audio file, matrix math for an LLM) is an action that can be analyzed.

            And clearly matrix multiplication cannot reason any better than an audio codec algorithm can. It’s matrix math; it’s cool, we love matrix math. Really big matrix math is really cool and makes real-sounding stuff. But it’s just matrix math, and that’s how we know it can’t think.

            • theunknownmuncher@lemmy.world · 1 day ago

              LOL you didn’t really make the point you thought you did. It isn’t an “improper comparison” (it’s called a false equivalency FYI), because there isn’t a real distinction between information and this thing you just made up called “basic action on data”, but anyway have it your way:

              Your comment is still exactly like saying an audio pipeline isn’t really playing music because it’s actually just doing basic math.

    • just_another_person@lemmy.world · 2 days ago

      It’s a developer option that isn’t generally available on consumer-facing products. It’s literally just a debug log that outputs the steps to arrive at a response, nothing more.

      It’s not about novel ideation or reasoning (programmatic neural networks don’t do that), but just an output of statistical data that says “Step 1 was 90% certain, Step 2 was 89% certain… etc.”

        • AnneBonny@lemmy.dbzer0.com · 1 day ago

          Maybe I should rephrase my question:

          Outside of comment sections on the internet, who has claimed or is claiming that LLMs have the capacity to reason?

        • theunknownmuncher@lemmy.world · 2 days ago

          I don’t want to brigade, so I’ll put my thoughts here. The linked comment is making the same mistake about self-preservation that people make when they ask an LLM to “show its work” or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.

          Just like how it’s not actually an AI assistant, but is trained and prompted to output text that matches what an AI assistant would be expected to respond with, if it is expected that it would pursue self-preservation, then it will output text that matches that. Its output is always “fake”.

          That doesn’t mean there isn’t a real potential element of self-preservation, though, but you’d need to dig and trace through the network to show it, not use the text output.

      • Em Adespoton@lemmy.ca · 2 days ago

        The study being referenced explains in detail why they can’t. So I’d say it’s Anthropic who stated LLMs don’t have the capacity to reason, and that’s what we’re discussing.

        The popular media tends to go on and on, conflating AI with AGI and synthetic reasoning.

        • theunknownmuncher@lemmy.world · 2 days ago

          You’re confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it is not able to reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behavior like planning ahead.

          • Em Adespoton@lemmy.ca · 2 days ago

            No, they really don’t. It’s a large language model. Input cues instruct it as to which weighted path through the matrix to take. Those paths are complex enough that the human mind can’t hold all the branches and weights at the same time. But there’s no planning going on; the model can’t backtrack a few steps, consider different outcomes and run a meta analysis. Other reasoning models can do that, but not language models; language models are complex predictive translators.

            • theunknownmuncher@lemmy.world · 2 days ago

              To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with “grab it”), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.

              Instead, we found that Claude plans ahead. Before starting the second line, it began “thinking” of potential on-topic words that would rhyme with “grab it”. Then, with these plans in mind, it writes a line to end with the planned word.

              🙃 actually read the research?

    • theunknownmuncher@lemmy.world · 2 days ago

      It’s true that LLMs aren’t “aware” of what internal steps they are taking, so asking an LLM how it reasoned out an answer will just produce text that statistically sounds right based on its training set, but to say something like “they can never reason” is provably false.

      It’s obvious that you have a bias and desperately want reality to confirm it, but there’s been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning.

      EDIT: lol you can downvote me but it doesn’t change evidence-based research

      It’d be impressive if the environmental toll of making the matrices and using them weren’t so critically bad.

      Developing a AAA video game has a higher carbon footprint than training an LLM, and running inference uses significantly less power than playing that same video game.

      • glizzyguzzler@lemmy.blahaj.zone · 2 days ago

        Too deep on the AI propaganda there; it’s completing the next word. You can give the base LLM umpteen layers to make complicated connections, and it still ain’t thinking.

        The LLM corpos trying to get nuclear plants to power their gigantic data centers, while AAA devs aren’t trying to buy nuclear plants, says that’s a straw man and that you’re simultaneously also wrong.

        Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s API, with its yuge context length and massive matrices that need hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.
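
        As a minimal sketch of the small-local-model end of that spectrum (assuming the Hugging Face transformers library, with “gpt2” standing in for whatever small or quantized model you’d actually run):

          # Load a small pre-trained causal LM and generate a few tokens locally.
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          inputs = tokenizer("The article I was looking for is", return_tensors="pt")
          output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
          print(tokenizer.decode(output_ids[0], skip_special_tokens=True))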

        It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain’t smart, and they ain’t worth destroying the planet.

        • theunknownmuncher@lemmy.world · 2 days ago

          it’s completing the next word.

          Facts disagree, but you’ve decided to live in a reality that matches your biases despite real evidence, so whatever 👍

          • glizzyguzzler@lemmy.blahaj.zone · 2 days ago

            It’s literally tokens. Doesn’t matter if it completes the next word or the next phrase, it’s still completing the next most likely token 😎😎 It can’t think, it can’t reason, it can only witch’s-brew up a facsimile of something done before.

            • Epp2@lemmynsfw.com · 23 hours ago

              Why aren’t they tokens when you use them? Does your brain not also choose the most apt selection for the sequence, to make maximal meaning in the context prompted? I assert that, after a sufficiently complex obfuscation of the underlying mathematical calculations, the concept of reasoning becomes an exercise in pedantic dissection of the mutual interpretation of meaning. Our own minds are objectively deterministic, but the obfuscation provided by the lack of direct observation gives the quantum cover fire needed to claim we are not just LLM-equivalent representations on biological circuit boards.

      • ohwhatfollyisman@lemmy.world · 2 days ago

        but there’s been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning.

        would there be a source for such research?

          • ohwhatfollyisman@lemmy.world · 2 days ago

            but this article suggests that llms do the opposite of logic, planning, and reasoning?

            quoting:

            Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning,

            are there any sources which show that llms use logic, conduct planning, and reason (as was asserted in the 2nd level comment)?

            • theunknownmuncher@lemmy.world · 2 days ago

              No, you’re misunderstanding the findings. It does show that LLMs do not explain their reasoning when asked, which makes sense and is expected. They do not have access to their inner workings and generate a response that “sounds” right, but tracing their internal logic shows they operate differently from what they claim when asked. You can’t ask an LLM to explain its own reasoning. But the article shows how they’ve made progress with tracing under the hood, and the surprising results they found about how it is able to do things like plan ahead, which defeats the misconception that it is just “autocomplete”.

  • JackGreenEarth@lemm.ee · 2 days ago

    By design, they don’t know how they work. It’s interesting to see this experimentally proven, but it was already known. In the same way the predictive text function on your phone keyboard doesn’t know how it works.

    • lgsp@feddit.it (OP) · 2 days ago

      I’m aware of this and agree but:

      • I see that asking an LLM how it got to its answer, as “proof” of sound reasoning, has become common

      • this new trend of “reasoning” models, where an internal conversation is shown in all its steps, seems to be based on this assumption of a trustable train of thought. And given the simple experiment I mentioned, it is extremely dangerous and misleading

      • take a look at this video: https://youtube.com/watch?v=Xx4Tpsk_fnM : everything is based on observing and directing this internal reasoning, and these guys are computer scientists. How can they trust this?

      So having a well-written article at hand is a good idea imho.

      • Blue_Morpho@lemmy.world · 2 days ago

        I only follow some YouTubers, like Digital Spaceport, but there has been a lot of progress from years ago, when LLMs were only predictive. They now have an inductive engine attached to the LLM to provide logic guard rails.

  • tal@lemmy.today · 1 day ago

    Define “know”.

    • An LLM can have text describing how it works and be trained on that text and respond with an answer incorporating that.

    • LLMs have no intrinsic ability to “sense” what’s going on inside them, nor even a sense of time. It’s just not an input to their state. You can build neural-net-based systems that do have such an input, but ChatGPT or whatever isn’t that.

    • LLMs lack a lot of the mechanisms that I would call essential to be able to solve problems in a generalized way. While I think Dijkstra had a valid point:

      The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

      …and we shouldn’t let our prejudices about how a mind “should” function internally cloud how we treat artificial intelligence…it’s also true that we can look at an LLM and say that it just fundamentally doesn’t have the ability to do a lot of things that a human-like mind can. An LLM is, at best, something like a small part of our mind. While extracting it and playing with it in isolation can produce some interesting results, there’s a lot that it can’t do on its own: it won’t, say, engage in goal-oriented behavior. Asking a chatbot questions that require introspection and insight on its part won’t yield interesting results, because it can’t really engage in introspection or insight to any meaningful degree. It has very little mutable state, unlike your mind.

  • frañzcoz@feddit.cl · 2 days ago

    There was a study by Anthropic, the company behind Claude, in which they developed another AI that they used as a sort of “brain scanner” for the LLM, in the sense that it allowed them to see a sort of model of how the LLM’s “internal process” worked.

  • markovs_gun@lemmy.world · 2 days ago

    “Researchers” did a thing I did the first day I was actually able to use ChatGPT, and came to a conclusion that is in the disclaimers on the ChatGPT website. Can I get paid to do this kind of “research”? If you’ve read even a cursory article about how LLMs work, you’d know that asking them what their reasoning is for anything doesn’t work, because the answer will always just be an explanation of how LLMs work generally.

    • lgsp@feddit.it (OP) · 1 day ago

      Very arrogant answer. Good that you have intuition, but the article is serious, especially given how LLMs are used today. The link to it is in the OP now, but I guess you already know everything…