• 6 Posts
  • 15 Comments
Joined 29 days ago
Cake day: January 6th, 2026

  • Not to be a downer if you’re anti-AI, but you should know that a functional, small, 1B-parameter model only needs ~85GB of data if the training set is high quality (the four-year-old Chinchilla paper set out a roughly 20-to-1 token-to-parameter rule for compute-optimal training, so it may require even less today).

    That’s basically nothing. If a language has over ~130,000 books or an equivalent amount of writing (1,500 books is about a gigabyte in plain ASCII), a functional text-based AI model could be built from it; the rough arithmetic is sketched below.
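
    A back-of-the-envelope version of that math (a rough sketch in Python; the 20-tokens-per-parameter ratio, ~4 bytes per token, and ~1,500 books per gigabyte are all assumptions carried over from the paragraphs above) lands in the same ballpark as the ~85GB / ~130,000-book figures:

    ```python
    # Rough Chinchilla-style estimate; every constant here is an assumption.
    params = 1e9                      # 1B-parameter model
    tokens_needed = 20 * params       # Chinchilla: ~20 training tokens per parameter
    bytes_per_token = 4               # ~4 ASCII characters per token, on average
    corpus_gb = tokens_needed * bytes_per_token / 1e9

    books_per_gb = 1500               # ~1,500 plain-ASCII books per gigabyte
    print(f"Corpus: ~{corpus_gb:.0f} GB")               # ~80 GB
    print(f"Books: ~{corpus_gb * books_per_gb:,.0f}")   # ~120,000 books
    ```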

    My understanding is that there are next to zero languages in existence today that lack this amount of quality text. Spoken languages with no written form are obviously not accessible this way, but most endangered languages with few speakers and a historical written tradition could in theory have AI models built that communicate effectively in them.

    To get a sense of what this means for less-written languages and a website revolving around them, look at WorldCat (which does NOT have anywhere near most of the written text for each listed language fully online; it’s JUST a resource for libraries): https://www.oclc.org/en/worldcat/inside-worldcat.html

    But this gets even harder for a theoretical website meant to avoid being readable by an LLM, because everything above assumes building an AI model for the language from scratch. That is not necessary today because of transfer learning.

    Major LLMs that cover over 100 diverse languages can be fine-tuned on an insignificant amount of data (even 1GB could work in theory) and produce results comparable to a 1B-parameter model trained solely on that one language. This is because multilingual models develop cross-lingual, vector-based representations of grammar. A minimal fine-tuning sketch follows below.
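
    Here is what that fine-tuning step could look like in practice (a minimal sketch assuming the Hugging Face Transformers and Datasets libraries; the model name and corpus path are placeholders, not recommendations):

    ```python
    # Hypothetical fine-tune of a multilingual causal LM on a small monolingual corpus.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "some-multilingual-base-model"    # placeholder name
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token

    # ~1GB of plain text in the target language, one passage per line (assumed layout).
    dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
    tokenized = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    ```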

    In truth, the only remaining major barriers for any language not reachable by fine-tuning an AI model today are (1) digitization and (2) character recognition. Digitization will vanish as an issue for basically every written language with a unique script within the next ten years. Character recognition (and more specifically, the economic viability of building the character recognition) will be the only remaining issue (see the OCR sketch below).
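
    For illustration, the digitization step itself is only a few lines once a character-recognition model exists (a sketch using the real pytesseract wrapper for Tesseract OCR; the "xyz" language code and file names are hypothetical, since for most low-resource scripts that trained Tesseract model is exactly the thing that would have to be built first):

    ```python
    # Hypothetical OCR pass over a scanned page with a trained Tesseract model.
    from PIL import Image
    import pytesseract

    page = Image.open("scanned_page.png")                  # placeholder scan
    text = pytesseract.image_to_string(page, lang="xyz")   # "xyz" = hypothetical language pack
    with open("digitized.txt", "a", encoding="utf-8") as f:
        f.write(text)
    ```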

    Ironically, in creating such a website, you will be creating more data for a future potential ai model to use in training. Especially if whatever you write makes the language of greater economic importance.


  • Iirc:

    The same officer who killed Renee Good stuck his hand in the window of a car driven by a convicted sex offender earlier this year and refused to let go when the driver pulled away, attempting all sorts of nonlethal force, like a taser, until the car crashed.

    There is some dispute over whether the officer was truly “stuck” or just held on in order to support greater charges against the convicted sex offender. What is indisputable is that the officer never attempted to use his gun in that moment, while he did use it on Renee Good.


  • Venezuelan oil MUST remain largely off the world markets in order for the current glut of oil production not to be an economic dead end for oil companies in the US and elsewhere that overcommitted in a world where EVs are proliferating at a rapid pace.

    Iirc Venezuela’s production is down because its facilities can’t process as much oil as at peak without multi-billion-dollar updates and repairs.

    I think it’s more likely the Trump admin gives US oil companies subsidies to take the oil out despite that being anti-capitalist and anti-competitive; those two things are his whole MO whenever he thinks capitalism isn’t beneficial for him personally.

    But this whole incident strikes me more as a case of projecting US power than of seeking petrodollars specifically. Like, that’s part of it, but not all of it.