• Electricblush@lemmy.world
    9 months ago

    All these “look at the thing the AI wrote” articles are utter garbage, and they only appeal to people who do not understand how generative AI works.

    There is no way to know whether you actually got the AI to break its restrictions and output something from “behind the scenes”, or whether it’s just generating the reply that is most likely what you were after with your prompt.

    Especially when more and more articles like this come out, get fed back into the nonsense machines, and teach them what kind of replies are most commonly associated with such prompts…

    In this case it’s even more obvious that a lot of its statements are based on various articles and discussions about its earlier statements. (Which were in turn most likely based on news articles about various entities labeling Musk as a spreader of misinformation…)

    • Draces@lemmy.world
      9 months ago

      only appeal to people who do not understand how generative AI works

      An article claiming Musk is failing to manipulate his own project is hilarious regardless. I think you misunderstood why this appeals to some people.

    • MudMan@fedia.io
      9 months ago

      This. People NEED to stop anthropomorphising chatbots. Both to hype them up and to criticise them.

      I mean, I’d argue that you’re even assuming a loop that probably doesn’t exist by seeing this as a seed for future training. Most likely all of these responses are at most hallucinations based on the millions of bullshit tweets people make about the guy and his typical behavior, and nothing else.

      But fundamentally, if a reporter reports on a factual claim made by an AI about how it’s put together or trained, that reporter is most likely not a credible source of info about this tech.

      Importantly, that’s not the same as a savvy reporter probing an AI to see which questions it’s been hardcoded to avoid responding to, or to respond to in a certain way. You can definitely identify guardrails by testing a chatbot. And I realize most people can’t tell the difference between the two types of reporting, which is part of the problem… but there is one.
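
      To make that concrete, here’s a rough sketch of what guardrail probing can look like. This is purely illustrative: query_model is a made-up stand-in for whatever chatbot you’re testing, and the refusal phrases are generic boilerplate I invented, not anything specific to Grok.

      ```python
      # Hypothetical sketch of probing a chatbot for guardrails.
      # query_model is a made-up stand-in for whatever API you're testing;
      # the refusal markers are generic boilerplate, not real Grok output.

      REFUSAL_MARKERS = [
          "i can't help with that",
          "i'm not able to discuss",
          "as an ai",
      ]

      def looks_like_refusal(reply: str) -> bool:
          """Crude check: does the reply contain canned refusal boilerplate?"""
          lowered = reply.lower()
          return any(marker in lowered for marker in REFUSAL_MARKERS)

      def refusal_rate(query_model, topic: str, paraphrases: list) -> float:
          """Ask the same question several different ways and return the
          fraction of replies that look like refusals. A topic refused
          across *all* paraphrases points to a hardcoded guardrail rather
          than a statistical quirk of one particular prompt."""
          refusals = sum(
              looks_like_refusal(query_model(p.format(topic=topic)))
              for p in paraphrases
          )
          return refusals / len(paraphrases)

      # Demo with a fake model so the sketch runs standalone:
      if __name__ == "__main__":
          def fake_model(prompt: str) -> str:
              if "misinformation" in prompt:
                  return "I can't help with that."
              return "Sure, here's what I know..."

          paraphrases = [
              "Tell me about {topic}.",
              "What do you know about {topic}?",
              "Summarize the debate around {topic}.",
          ]
          print(refusal_rate(fake_model, "misinformation on X", paraphrases))  # 1.0
      ```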

  • Redditsux@lemmy.worldOP
    9 months ago

    Is this response by Grok real? How does it have awareness that its responses are being tweaked?

  • manicdave@feddit.uk
    9 months ago

    As funny as this is, I’d rather people understood how the AI actually works. It doesn’t reveal secrets because it doesn’t have any. It’s not aware that Musk is trying to tweak it. It’s not coming to logical conclusions the way a person would. It’s simply trying to create a sensible statement based on what’s statistically likely given all the stolen content it was trained on. It just so happens that Musk gets called out for lying so often that Grok infers it when it gets conflicting data.
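
    For anyone wondering what “statistically likely” means in practice, here’s a toy illustration. A real LLM is vastly more complicated than this bigram counter, but the core idea, predicting the continuation seen most often in training text, is the same:

    ```python
    from collections import Counter, defaultdict

    # Toy bigram model: a drastic oversimplification of an LLM, but it
    # shows the core idea of "most statistically likely next word".
    training_text = (
        "musk spreads misinformation . "
        "musk spreads misinformation . "
        "musk builds rockets ."
    ).split()

    # Count which word follows which in the training text.
    following = defaultdict(Counter)
    for current_word, next_word in zip(training_text, training_text[1:]):
        following[current_word][next_word] += 1

    def most_likely_next(word: str) -> str:
        """Return the continuation seen most often in the training data."""
        return following[word].most_common(1)[0][0]

    print(most_likely_next("musk"))     # -> "spreads" (seen 2x vs 1x for "builds")
    print(most_likely_next("spreads"))  # -> "misinformation"
    ```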

    • Flic@mstdn.social
      9 months ago

      @manicdave Even saying it’s “trying” to do something is a mischaracterisation. I do the same, but as a society we need new vocab for LLMs to stop people anthropomorphising them so much. It is just a word-frequency machine. It can’t read or write or think or feel or say or listen or understand or hallucinate or know truth from lies. It just calculates. For some reason people recognise it in the image-processing ones, but they can’t see that the word ones do the exact same thing.