

Thanks! I’ll take a look



Thought it was common knowledge at this point…
The creator of Hyprland:
[A trans person] joined the Discord server and made a big deal out of their pronouns […] because they put their pronouns in their nickname and made a big deal out of them because people were referring to them as “he” [misgendering them], which, on the Internet, let’s be real, is the default. And so, one of the moderators changed the pronouns in their nickname to “who/cares”. […] Let’s be real, this isn’t like, calling someone the N-word or something.
A screenshot from their Discord:

I’ve never personally had issues with 8x7b refusing requests, but I guess I haven’t really plumbed the depths of what it might agree or disagree to. I have run it through the ordinary gamut (pretend to be public figure X, make dangerous item Y, say untrue things about company Z) and it hasn’t given me any problems, but sure, whatever works for you.
Actually not 100% true: you can offload a portion of the model into RAM to save VRAM, which saves money on a crazy GPU while still running a decent model; it just takes a bit longer. I personally can wait a minute for a detailed answer instead of needing it in 5 seconds, but of course YMMV.
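To make the VRAM trade-off concrete, here's a rough back-of-the-envelope sketch. The model size and layer count are approximations for a Q4-quantized Mixtral 8x7b GGUF, not exact figures, and it ignores KV-cache and runtime overhead:

```python
# Rough estimate of how offloading layers to system RAM reduces VRAM use.
# Numbers are illustrative assumptions, not measurements.

MODEL_SIZE_GB = 26.0  # approx. size of a Q4-quantized Mixtral 8x7b GGUF (assumption)
NUM_LAYERS = 32       # Mixtral 8x7b has 32 transformer layers

def vram_needed_gb(gpu_layers: int,
                   model_size_gb: float = MODEL_SIZE_GB,
                   num_layers: int = NUM_LAYERS) -> float:
    """Approximate VRAM used when only `gpu_layers` layers live on the GPU;
    the remaining layers sit in system RAM (ignores KV cache overhead)."""
    return model_size_gb * gpu_layers / num_layers

# Whole model on the GPU vs. a half-and-half split:
print(vram_needed_gb(32))  # all 32 layers in VRAM
print(vram_needed_gb(16))  # half the layers in VRAM, rest in RAM
```

The point is just that VRAM use scales roughly linearly with the number of layers you keep on the GPU, so a mid-range card plus plenty of RAM can still run a big model, slowly.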
I wouldn’t say it’s easy to get started…
You have to know about open-source AI models, then you have to know what fine-tuning is, then you have to know where to get software that runs the models, and finally you need to know which models are compatible with both AMD and Nvidia graphics cards.
There are plenty of open-source models that don’t really have any restrictions, you just have to host them yourself (which you can do on your own computer if you have a decent GPU).
for example: mixtral 8x7b
just use koboldcpp or something similar to run the GGUF files and you’re good
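For example, launching a GGUF model with koboldcpp from the command line might look something like this. The flag names are from memory and the model filename is a placeholder, so double-check against koboldcpp's own `--help` output:

```shell
# Run a local GGUF model with koboldcpp (flags may vary by version).
# --gpulayers controls how many layers are offloaded to VRAM;
# lower it if you run out of GPU memory, at the cost of speed.
python koboldcpp.py --model mixtral-8x7b-instruct.Q4_K_M.gguf \
    --gpulayers 20 \
    --contextsize 4096
```

Once it's running, it serves a local web UI and an API endpoint you can point chat frontends at.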
You know it’s a good article when a dumb tankie is seething in the comments