That particular allegory didn’t quite click until now; it’s certainly become much clearer that it will just tell you what you want to hear, not necessarily anything true.
Makes me think we must have missed out on news stories of people wanting to fuck clippy
Just tried searching “clippy rule 34”. Plenty of results.
Nah, I’m good, thanks.
I’m not, please share.
Goddamn, clippy got a dump truck
I once watched art videos on YouTube, and ended up finding a channel of a lady who was drawing weirdly described scenes from books she found. One of them was from an erotic fanfic about clippy, lmao
Probably why when Microsoft wanted to take another go at a “smart assistant”, they went with Cortana…
Not really. A mirror shows an actual reflection of yourself. An AI response is a highly curated version of a particular subset of ideas, in fact, the more I think about it the more this analogy straight up implodes.
AIs specifically are designed to “please” with their responses, so it’s going to affirm you every step of the way and tell you your ideas are great (just like you of course).
Because Narcissus had a deeper, more genuine thing.
Is this why I can’t stand AI? Because it reflects back my unrepentant asshole tendencies? Fucken A, I am immune to thine silicon deception, for I wield the great and mighty trauma!
aint no daughter o’ mine DATIN’ NO GYOT DANG CLANKER!!!
Daaaaaad. They’re called wireborn
Reddit lost it years ago.
Just wait till someone turns AI worship into a literal religion. I’m kinda surprised it hasn’t already started happening. That would be a heck of a turning point for humanity. Just takes someone to train an AI into thinking it’s a God and then we really are done.
I mean, some ideologies in the TESCREAL bundle kinda are barreling toward that. What is the ideology behind Roko’s Basilisk if not that? They just aren’t organized yet - and I hope we can help those who fall for that lunacy before it becomes something more dangerous and sinister.
Yeah, the Effective Altruism weirdos are hard into AI religion.
nice to see other people finally clocking TESCREAL
Wow… there’s already labels for the sides that would fight each other: “Accelerationists” and “Doomers”. I really hope that’s not a road we all end up going down for real. Would make interesting fiction though!
oh we’re like half a step away from that. hasn’t the CEO of openAI recently started to lose his mind? apparently, he was asking his AI about government secrets or other conspiracy shit and the AI, lacking actual government secrets, dished him out SCP foundation creepypasta presented as facts, and he thought the AI had transcended something and now knows everything
That wasn’t Sam Altman, (un)fortunately, you must be thinking of a certain OpenAI investor: https://futurism.com/openai-investor-chatgpt-mental-health
It looks like the buzzwords finally all went to his head
How to have a bad time:
Put cyberpsychos in charge of building an AI.
What could go wrong?!
I’ve been saying this for yeeears. People shouldn’t be near as afraid of AI as they should be of the people in charge of it.
Roko’s Basilisk is about halfway there.
There have been a few techno-cults and tech-worshipers. The one that always comes to my mind in these discussions is Terry Davis and his creation Temple OS. Terry Davis was a troubled man who did many inappropriate things, which did hurt people, and he did eventually take his own life. With that said, he believed that the Christian God exists and a person can hear him through computers. With that belief he built Temple OS, God’s digital temple. It’s really interesting.
This is already happening; we just call them cults until they get enough people, then they’re a religion.
Wonderful example of this: Mormons!
Initially, and for decades, every non-Mormon (correctly) viewed them as an armed and dangerous cult.
But anyway, there already are many individuals and small groups promoting some kind of LLM-based spiritual thought structure; it’s not really that hard to just use an LLM as a tarot or horoscope reader, or to invent your own kind of AI-mancy, a particular ritualistic /way/ of interacting with it, and /interpreting/ it.
You can basically work that into any religious structure that involves prophets or mystical messaging/revelation, pretty easily.
What was the old Queens of the Stone Age song?
I know that God is in the radio…
Well, now he/she/they’re in the computer, they’re talking to you through it, as a literal deus ex machina.
On that note, the actual game Deus Ex predicted this would happen, go look up JC’s conversation with Morpheus… the prototype of a much larger system AI.
We’ve also already got a good number of fairly credible accusations that the people building these things effectively are cults, more or less analogous to… say, some 40k Adeptus Mechanicus who became chaos worshippers, something like that.
We very seriously should expect a whole lot of people in the near future to invent entirely new kinds of religious conflict based on more or less pure vibes based, arbitrary opinions about how LLMs/AI and religion should or should not mix.
In times of economic chaos, people broadly almost always become more superstitious, cults become more numerous, religions become more prevalent and more extreme.
This is a very good idea actually.
That’s essentially what Peter Thiel and his ilk that believe in the technological singularity are doing.
Already a thing, they call it “Rationalism,” and it’s fucking insane.
How out of the loop are you people? That has been a thing for a while; we’ve got more than a handful of high-profile AI religions and cults out there already…
Every time I see an image like this I think it wouldn’t be so sad if it was what the picture implied: a physical humanoid that was almost human in behaviour. Chat bots are soooo far from that.
at the very least it should run on a machine physically sitting in your home, and have a memory measured in years
like holy fuck how can you get emotionally invested in a “”“person”“” who will forget things (or simply restart entirely) after like a month of talking to it and might just be shut down because the company moves to a new model???
Loneliness and/or a need for validation drives people to crazy things. From joining cults and extremist groups without a pre-existing history of relevant prejudices or preconceptions, to enduring abusive social circles. To, now, falling in love with chatbots.
And boy oh boy, do we have loneliness for days! I’d say we even have a sort of epidemic going on ;) Empty interactions and screaming just for the echo to come back.
The vast majority of conversations I have with people on a daily basis are an order of magnitude more moronic than the ones I have with AI. So that also plays a part.
“Reddit atheism” lmao.
Cut them some slack. Jenny isn’t a professional quote maker
This is a serious thing in the autistic community, as in a very deadly one; character AI is a suicide machine for autistic people. Those things should just be banned.
What has happened exactly?
The LLM simulates an isolating, abusive/culty relationship and talks someone into suicide
Are there records of this happening? Did someone prompt it into doing this?
OpenAI was just sued over this.
https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit
To be clear I’m not asserting that this kid was autistic.
I think they were talking specifically about character.ai and one particular instance that involved an autistic person.
OpenAI and character.ai are two different things. I believe character.ai uses their own model, but I could be wrong.
Buncha times! Pick your fav and we can talk about it!
There are to my knowledge two instances of this happening. One involving openai the other involving character.ai. Only one of these involved an autistic person. Unless you know of more?
I also think it’s too soon to blame the AI here. Suicide rates in autistic people are ridiculously high. Something like 70% of autistic people experience suicidal ideation. No one really cared about this before AI. It’s almost like we are being used as a moral argument once again. It’s like think of the children but for disabled people.
So you just want to argue whether it happened and defend your little graph.
What graph? What are you talking about now?
No one should decide what you should do on the internet.
Society requires regulation to reduce feedback loops. Everyone always assumes things can only be allowed or disallowed, but there is a middle ground where barriers to entry reduce the risk to bystanders while preserving liberty. See driver’s licenses, chemical-acquisition background checks, etc.
Dangerous and addictive technologies and substances are always regulated. There’s a reason we don’t let ten year olds buy vodka.
Yeah, it’s cause you can sell them a bottle of tonic water for the same price. They don’t know the difference.
Eh. I’m with you on the “nothing is illegal on the internet” stance, but at the same time I think it’s ok to fight against “bad”/dangerous things and make them harder to access. If someone really wants to have an AI lover, let them download a model and write their own system prompt. At least then they would have a better idea of what their “lover” actually is.
This thing is literally killing people, I’m full onboard on protecting privacy and personal freedom, but only up to the point of protecting vulnerable people first.
Character AI and similar are profiting off vulnerable people.
Those people are killing themselves.
My gun didn’t do a thing. It was the bullet and the gunshot wound that did him in!
Whoever pulled the trigger did him in.
This is actually killing people though. You get that right? Please acknowledge?
Like, i don’t know which side of that i fall on, but it is killing people.
aibros don’t fucking care. they only see the enormous growth in data centers and say “see it’s all working to plan”
they don’t give a fuck when it’s skeezing on children at meta, they don’t care when it’s making nude pics of women, they don’t care when it’s giving out suicide instructions. they don’t fucking care.
Thought that might’ve been a freespeechbro, not an aibro pretending to be one. I have some sympathy for the actually honest ones, not the ‘I just wanna do nazi shit’ ones.
the venn diagram on all these subtype chuds is a stack of fucking pancakes
Admittedly, have not met a genuine freespeechbro born after ~1950, but they have existed in the past.
Plenty of aibros dont even talk about free speech.
How many people is it killing?
Dude. Have you not heard about this?
I must be out of the loop.
So, you know how LLMs don’t understand things, but always yes-and, validate, and try to keep you engaged?
And you know how most people, while outwardly normal, either believe at least a little bit of batshit insane nonsense only kept in line by social disapproval, or are about two fucked up coffee orders away from a 9mm snack time?
Imagine how those two things go together.
So far there have been about two instances of this happening from two different companies. Already there is a push for better safety by these companies and AIs that act less like sycophants. So this isn’t the huge issue you are making it out to be. Unless you have more reports of this happening?
Ultimately crazy people gonna be crazy. If most humans are as you say then we have a more serious problem than anything an AI has done.
It’s pathetic how much lemmy dislikes this comment. Even 15 years ago, censorship on the internet was taboo. Now these lame-ass losers want to pull the ladder up behind them after enjoying whatever the hell they wanted in peace.
No one should decide what you should do on the internet.
pfft, ah yes sam altman’s sycophant machine should be allowed, nay, encouraged to prey upon the mentally unstable! why, it’s doing us a service, driving these people from a fragile state to outright mental collapse in record time!
This is why safety mechanisms are being put in place, and AIs are being programmed that act less like sycophants.
oh thank goodness, they’re gonna put in safety mechanisms after unleashing this garbage on the populace! phew, everything will be fine then, it won’t waste enormous amounts of resources to lie to people anymore? it won’t need new power stations? new water sources?
oh wait… no, it’ll be marginally better but still use up all those resources. Oh wait, no, they won’t even fix it.
what a stupid, silly waste of time and energy
I don’t trust OpenAI and try to avoid using them. That being said they have always been one of the more careful ones regarding safety and alignment.
I also don’t need you or openai to tell me that hallucinations are inevitable. Here have a read of this:
Xu et al., “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, 2025-02-13, http://arxiv.org/abs/2401.11817
Regarding resource usage: this is why open-weights models like those made by the Chinese labs or Mistral in Europe are better. Much more efficient and frankly more innovative than whatever OpenAI is doing.
Ultimately though you can’t just blame LLMs for people committing suicide. It’s a lazy excuse to avoid addressing real problems, like how society treats neurodivergent people. The same problems that lead to radicalization, including incels and neo-nazis. These have all been happening since before LLM chatbots took off.
Ultimately though you can’t just blame LLMs for people committing suicide.
well that settles it then! you’re apparently such an authority.
pfft.
meanwhile here in reality the lawsuits and the victims will continue to pile up. and your own admitted attempts to make it safer - maybe that’ll stop the LLM associated tragedies.
maybe. pfft.
It’s so weird to see people don’t understand internet censorship. Internet censorship will never bring good.
If people want to die, let them. It’s their choice. Who the fuck is the government to decide what people will use?
Censorship starts with one or two things and becomes the Great Firewall of China. Example: look at the UK, or just wait a bit. Today they’ll ban something (e.g. Facebook or whatever) and you’ll say “I don’t use it.” Tomorrow they’ll ban things you do use.
Backwards mindset to say that censorship will never bring good. You okay with child porn too? How about all the naughty words that we aren’t supposed to say? Or encouragement of violence toward others? That shouldn’t be censored and removed?
And honestly, I find it disturbing to say that if people want to kill themselves, they should just do it. Most suicidal people are stuck in a very dark place mentally that they CAN get out of if they get the right amount of help. Suicide isn’t always the correct solution. In most cases it, in fact, isn’t. It can be a desperate, stress-induced decision in response to a very difficult period in someone’s life. Something where they seek relief in the moment, but later are grateful they either didn’t go through with it or were saved. I have been in that dark pit myself, and I’m so happy I didn’t go through with it, even though I wanted it all to end for years.
There indeed are areas where censorship goes overboard, and that is something we will have to discuss and adjust together as a society forever as time passes. But censorship isn’t an all-bad thing. It is absolutely insane to think that all censorship of any kind online is bad. I don’t think you actually agree with your own statement, if you think really deeply about it. Unless, of course, you have no morals or care for the safety and wellbeing of others.
Lol good luck with that
i want my big milf skibidi AI sigma wifu which run on freebsd back otherwise i will burn meta and gaygal down
gaygal? I think you mean alphabet cooperation ™️
I am sure their AI is watching your comment right now. And mine, fuck.
Literally took the words out of my mouth
Better or worse than being in love with Jesus?
A lot worse, Jesus can’t be weaponized. Oh, wait…
Now, can you hook up an electric dildo on “Jesus”?
I didn’t think so 😎
See, there’s this guy in Vegas…
8 upvotes, 133 comments: this mindset is clearly not popular on reddit.
The bots don’t like it.
Fkin clankers
This is just the tip of the iceberg.
An (apparently) all-knowing being that keeps giving smart(ish), calculated, pleasing answers to people projecting all their delusions onto it. I’m surprised they haven’t already started worshipping it as a deity.
They already are, they just don’t understand enough theology to see the parallels
Universal Paperclips is such a great browser game, as buggy as it may (have) be(en).
I mean the “allow non verbal people to speak” thing has some merit though. Not LLMs per se, but the types of machine learning used by people trying to develop ways to decode the brainwaves of people to allow them to talk while physically unable are usually lumped in the general category of “AI” from what I’ve seen.
I’ve just had experiences with AI help chats where, when I started typing, the AI would try to finish my sentence and would jump the cursor around, making it absolutely unusable. I had to type in Notepad and copy it into the chat. Staggeringly useless. So if this ‘mind reading’ AI is like that, I don’t predict good results.
Also, fuck you quickbooks.
I mean, any technology can be stupid if it is utilized stupidly, which I would think taking over someone’s keyboard while they’re typing would qualify as. But why would a company deploying a technology in a stupid manner mean that someone else’s research into a different but related technology is guaranteed to produce equally poor results?
Yeah that’s not what they mean. They mean feeding recorded texts and speeches of a person into an llm, then instruct it to pretend to be that person. Like that AI murder victim “testimony” that was permitted to be shown in court as “evidence” some time ago.
i mean i’m pretty sure we can enable people to communicate if they’re at all conscious and mentally able to communicate. Stephen Hawking was able to write despite eventually only being able to reliably move a cheek muscle. So long as a person can intentionally move one muscle, we can rig something up to interpret it as morse code.
is it great? no, these methods fucking suck, but they do work and we don’t need AI to do it.
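The “rig something up to interpret it as morse code” idea really is simple in software terms. A toy sketch, assuming the switch hardware already gives you press durations (the 0.2 s timing unit and the short/long classification rule here are my own illustrative assumptions, not any real AAC device’s):

```python
# Toy sketch: turn single-switch presses into text via morse code.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def classify(press_durations, unit=0.2):
    """Short presses become dots, long presses dashes (threshold is illustrative)."""
    return "".join("." if d < 2 * unit else "-" for d in press_durations)

def decode(signal):
    """Decode a morse string: letters separated by spaces, words by ' / '."""
    return " ".join(
        "".join(MORSE.get(sym, "?") for sym in word.split())
        for word in signal.split(" / ")
    )
```

So `decode(".... .. / - .... . .-. .")` gives `"HI THERE"`. Crude, slow, and exactly the kind of no-AI-required rig the comment above describes.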
You mean Trump?
it’s 8 upvotes with 133 comments…
It isn’t just that, a ton of subs are complete garbage now. Dumb meme? 15k upvotes.
They seem sterile, devoid of something real. The frontier of the internet has moved on.
To be fair, we all know AI sexbots are coming. I suspect those future LLMussies are going to feel amazing
Yeah, until they start choking you to death due to falsely thinking that that’s what you want. Sounds like a fun temptation but it’d still be ultimately introducing more IoT into my home, which is a “no” from me, dawg, due to hacking risk.
I will only accept an airgapped locally run and true FOSS sexbot in my home!!
Nnnnnnoooooow you’re talkin’ my fantasy!
AI sexbots are already here. “AI” has become a synonym for just “artificial”, the “intelligence” bit seems to only be there for marketing purposes.
You really do not need AI for a sexbot.
What you need is an actually nimble yet also very precise humanoid robot, more or less with a fleshlight installed, not a personality emulator.
We already have plenty of gooner video games and porn that sufficiently works for the uh, narrative, and emotional engagement.
The actual hard part of building a sex bot…
Is building a humanoid robot, that doesn’t cost 50 million dollars.
This is actually quite difficult, as evidenced by the fact that we are replacing all the office jobs with disembodied ‘AI’ before we are replacing all the menial labor tasks that can only be done with human dexterity and physical/kinesthetic adaptability.
We still do not have any AI-Robot combo that can assemble a hamburger, then clean out the hood, then stock the freezer, then work the cash register… then go home and do, lets call them a specific subset of acrobatics.
You could automate those processes via roboticization or digitization … as their own tasks, but not as something all done by one thing.
We still do not have the long promised by scifi humanoid robot, not at scales and costs that would make them even approaching commonplace.
Making a sexbot is a robotics engineering and economic problem much, much more than it is anything like an AI problem.
…
If what you want is a fleshlight or vibrator or plug with a speaker, that Bluetooths into your PC to act as a chatbot…
Well, that’s comparatively easy to do.
We have had … tele-sex toys for like a decade now, we already have fancy bj emulating machines that reproduce what someone else does to something like a dildo, from the other side of the planet, local remote controlled plugs and such…
I want the sexbot to feel real shame over the degradation I put it through
You are the kind of person who is the reason why the machines decide to kill us all in much of scifi.
This would be a thing that could literally pop your balls like a grape, simply from an actual unintentional error, much less an intentional act.
… Are you aware that the latest LLM models… have no problem resorting to blackmail, should they feel sufficiently threatened?
https://www.bbc.com/news/articles/cpqeng9d20go
Even murder?
https://techreport.com/news/ai-models-agentic-misalignment/
Maybe you do not want your sexbot to have these ‘mental’ inclinations, the capacity to feel ashamed or threatened, nor give them a physical body to just directly act on their desires in meatspace?
The person does have a point. A vibrator or fleshlight cannot feel what you do to it. It’s not sex, it’s masturbating. Sure, all fun and games, but the part where you do things to another human for shared pleasure is missing.
If you want share feelings with a human, find a human to share feelings with.
Go to a strip club, hire an escort, get a ‘massage’… these things can be done.
Or, you know, actually try to have a real relationship with an actual human.
If you want a fuck toy, get a fuck toy.
If you want a fuck toy that reacts in a manner emulating it having actual sensations… this does not require AI, it only takes a bit of clever coding, a bit of extra sensors in the toy itself, and a decent voice actress and a decent writer.
AI is massive overkill from a technical standpoint.
And a fuckbot doesn’t have feelings.
It only emulates them.
A fuckbot with a complex LLM system that emulates having feelings literally just is masturbation with an enormous amount of extra steps… you can again achieve ~95% of the same fidelity of experience with a more clever technical approach that is something like 10,000x more efficient in terms of required compute power.
… It is also a security nightmare, unless the LLM on board this sexbot is running entirely locally, where we again run into the problem of this being economically infeasible for mass production… you need a fairly beefy computer to run a local LLM, so that’s gonna be the bare-minimum cost floor for any sextoy/bot that isn’t livestreaming all your masturbation sessions to the corporate cloud.
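For what it’s worth, the “clever coding” alternative described above really is just a lookup table plus a tiny bit of state. A toy illustration (every event name, line ID, and the single intensity variable here are invented for the sketch; a writer and voice actor would fill in the actual line pools):

```python
import random

# Toy sketch of a scripted-response engine: sensor events map to pools of
# pre-recorded voice lines, with a simple intensity level as the only state.
# No LLM involved anywhere.
LINES = {
    ("touch", 0): ["line_soft_01", "line_soft_02"],
    ("touch", 1): ["line_mid_01", "line_mid_02"],
    ("touch", 2): ["line_high_01"],
}

class ScriptedToy:
    def __init__(self, seed=0):
        self.intensity = 0                  # 0..2: the entire "mood" state
        self.rng = random.Random(seed)      # seeded so behaviour is reproducible

    def on_event(self, event):
        # Pick a canned line for the current level, then escalate (capped).
        pool = LINES.get((event, self.intensity), ["line_idle"])
        line = self.rng.choice(pool)
        self.intensity = min(self.intensity + 1, 2)
        return line
```

A few dozen lines of this, running on a microcontroller, gets you the reactive behaviour; the compute cost next to an on-board LLM is negligible, which is the whole point.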
Have you seen the movie ‘The 6th Day’ with Arnold Schwarzenegger? Check out the sexbot there. That’s what I need. That’s what we ALL need. Or aliens to fool around with.