I found the article in a post on the fediverse, and I can’t find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally: it arrived at the answer by following similarity-based paths, nothing like performing mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, what its internal reasoning was. The answer was detailed, step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don’t “know” how they work

  • the second answer was a rephrasing of text from the training data that explains how math works, so the LLM just used that as an explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher @lemmy.world https://www.anthropic.com/research/tracing-thoughts-language-model it’s this one

EDIT2: I’m aware LLMs don’t “know” anything and don’t reason, and that’s exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

  • Em Adespoton@lemmy.ca · 3 days ago

    The study being referenced explains in detail why they can’t. So I’d say it’s Anthropic who stated LLMs don’t have the capacity to reason, and that’s what we’re discussing.

    The popular media goes on and on conflating AI with AGI and synthetic reasoning.

    • theunknownmuncher@lemmy.world · 3 days ago

      You’re confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it cannot reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behavior like planning ahead.

      • Em Adespoton@lemmy.ca · 3 days ago

        No, they really don’t. It’s a large language model. Input cues instruct it as to which weighted path through the matrix to take. Those paths are complex enough that the human mind can’t hold all the branches and weights at the same time. But there’s no planning going on; the model can’t backtrack a few steps, consider different outcomes, and run a meta-analysis. Other reasoning models can do that, but not language models; language models are complex predictive translators.
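        The "predictive translator" view boils down to a decoding loop that repeatedly picks the heaviest-weighted next token. A minimal toy sketch of that loop (hypothetical hand-written probabilities, not any real model's weights):

```python
# Toy sketch of greedy next-token prediction: the "model" is just a lookup
# from the current context to a probability distribution over next tokens,
# and decoding follows the heaviest weighted path with no backtracking.
next_token_probs = {
    ("7", "+", "4", "="): {"11": 0.9, "12": 0.05, "10": 0.05},
    ("7", "+", "4", "=", "11"): {"<end>": 1.0},
}

def generate(context):
    tokens = list(context)
    while True:
        dist = next_token_probs[tuple(tokens)]
        best = max(dist, key=dist.get)  # pick the most probable next token
        if best == "<end>":
            return tokens
        tokens.append(best)

print(generate(["7", "+", "4", "="]))
```

        A real model replaces the lookup table with a learned network, but the loop itself never "considers different outcomes": each step commits to one token and moves on.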

        • theunknownmuncher@lemmy.world · 3 days ago

          To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with “grab it”), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.

          Instead, we found that Claude plans ahead. Before starting the second line, it began “thinking” of potential on-topic words that would rhyme with “grab it”. Then, with these plans in mind, it writes a line to end with the planned word.

          🙃 actually read the research?

          • glizzyguzzler@lemmy.blahaj.zone · 6 hours ago

            No, they’re right. The “research” is biased by the company that sells the product and wants to hype it. Many layers don’t make it think or reason, but they’re glad to put “thinking” and “reasoning” in quotes that they hope people will forget were there.