• nonentity@sh.itjust.works · 2 days ago

    I’ll never understand how an explosively imprecise, statistically lukewarm, grey goo extrusion sphincter could ever be mistaken for intelligence.

    AI doesn’t exist, it’s a vacuous marketing term.

    LLMs have a vanishingly narrow set of legitimate, defensible use cases, their output is intrinsically inaccurate, and it should never be used without supervision from relevant domain experts.

    • texture@lemmy.world · 2 days ago

      there’s plenty of legitimate use cases. your comment just sounds like you’re repeating what everyone else says about it.

      • nonentity@sh.itjust.works · 2 days ago

        There could be many use cases, and some of them may even be legitimate, but I’ve yet to observe any with broad applicability, and they should only ever be wielded by a responsible, expert adult.

        • architect@thelemmy.club · 10 hours ago

          Oh boy wonder what kind of gatekeeping would come out of locking tech away unless you’re a “responsible expert adult”!

          Jesus Christ not a single bit of thought went into what you said, did it?

          You want to lock it away from everyone but experts? Who decides what an expert is, POTUS? The GOP? The treasonous institute known as Stanford?

          Is it only people with degrees? People making a certain amount? People that can pay a large licensing fee?

          What a stupid thing to say.

          • nonentity@sh.itjust.works · 2 hours ago

            ‘AI’ fluffers sure do love the taste of grift flavoured tokens.

            I’d ask what you were thinking, but it’s clear that played no material role in this extrusion. Extrapolating the assertion I constrained to a specific topic to the entirety of ‘tech’ is a bellowing straw man.

            Further, the exclusively US-centric examples of inappropriate stewards reveal the vantage to be squarely rooted inside that noxious bubble. The invocation of treason further betrays an affinity for national subservience.

            To refine my original point: in my observation, the only entities who find LLMs impressive are those who expressly lack proven expertise in the area where they’re being applied. The correlation appears to be almost perfectly inverse.

            LLMs could eventually prove innately useful, but there’s no indication they’re close to that, let alone traversing a relevant vector.

            Personally, I think any world populated with entities who are impressed by LLMs is a world not worth living in.

        • texture@lemmy.world · 2 days ago

          it’s a multitool; its applicability needn’t be broad, imo. anyway, i’m glad we aren’t speaking in absolutes.

    • dogzilla@masto.deluma.biz · 2 days ago

      @nonentity @technology I think the problem with your framing is that it implies humans are not also “explosively imprecise, statistically lukewarm, grey goo extrusion sphincter(s)”. We weren’t exactly living in a perfect world prior to AI, and all AI does is regurgitate what humans created. AI isn’t really changing the character of anything, and in several domains I’d argue it’s improving the baseline (coding, for one).

      • nonentity@sh.itjust.works · 2 days ago

        It’s telling that you assumed the description applied exclusively to LLMs.

        No one who persists in labelling LLMs as ‘AI’ should be treated as an authority on the subject, and I’d argue it’s one of the greatest indicators of how little they comprehend the situation.

        • astronaut_sloth@mander.xyz · 2 days ago

          THANK YOU! I studied AI in school, and it always bothers me when people think that LLMs are the only facet of AI. Between 2022 and 2024, I had a knee-jerk reaction of explaining that AI is more than LLMs, that LLMs are really a small subset of the entire universe of AI, yadda yadda yadda. Now I’ve given up and roll my eyes as someone tries to tell me about the cool new Claude skill they built.

          What’s funnier is that people think I hate LLMs. That couldn’t be further from the truth; they are a fantastically interesting and innovative technology! “Attention Is All You Need” is a great paper, and super impactful. I just hate that people are outsourcing their thinking to a chatbot and neglecting the rest of my field of study.

          • howrar@lemmy.ca · 2 days ago

            LLMs are still a facet of AI though. It sounds like they’re saying it shouldn’t be categorized as AI at all.

        • Em Adespoton@lemmy.ca · 2 days ago

          I’m confused. Aren’t you the one who referred to LLMs in a thread that was conflating LLMs with AI? The parent’s comment seems to be right on point.

          It’s kind of like how we’ve lost the war on hacking.

          Large language models fall under the current definition of artificial intelligence just as much as Cyc or Cog did in their day, or various expert systems, machine learning models, diffusion models, etc.

          Pretty much any non-deterministic inference engine can be classified as an AI, including LLMs.