• 0 Posts
  • 8 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • Not an expert on this topic, but I’ve read about it a fair bit and tinkered with image generators:

    You don’t post them, basically. Unfortunately, nothing else will really work in the long term.

    There are various tools – Glaze is the first one I can think of – that subtly modify the pixels of an image in a way that is imperceptible to humans but causes the computer-vision part of image-generator AIs (the part that, during training, looks at an image and produces a text description of what is in it) to freak out and become unable to understand the image. In the literature this is known as an adversarial attack.
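    To make “subtly modify the pixels” concrete, here is a minimal sketch of the simplest adversarial attack from the literature (FGSM), written against an off-the-shelf PyTorch classifier. This only illustrates the general idea – Glaze’s actual technique is different and more sophisticated, targeting style features rather than class labels:

```python
# Minimal FGSM sketch (NOT Glaze's method): nudge every pixel slightly
# in the direction that most increases the model's loss. At a small
# epsilon the change is nearly invisible to humans but can change
# what the model "sees".
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=2 / 255):
    # image: (1, 3, H, W) tensor in [0, 1]; label: (1,) class index
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/-epsilon along the sign of the gradient
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Example: perturb a random "image" labelled as class 0
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())  # <= epsilon: a tiny per-pixel change
```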

    The intention of these tools is to make it harder to use the images for training AI models, but there are several caveats:

    • Though they try to be visually undetectable to humans, these tools can still create clearly visible artifacts, particularly at higher strength settings. This is most noticeable on hand-drawn illustrations, less so on photographs.
    • Lower strength settings produce fewer artifacts but are also less effective.
    • They can only target existing models, and even then won’t be equally effective against all of them.
    • There are ways of mitigating or removing the effect, and the protection will likely not work on future AI models (robustness to adversarial attacks is a major research interest in the field).

    So the main thing you gain from using these is that, right now, it becomes harder for people to use your art for style-transfer or fine-tuning purposes to copy your specific art style. The protection has an inherent time limit because it relies on a flaw in the AI models, which will eventually be fixed. Other exploitable flaws will almost certainly be discovered after the current ones are fixed, but the art you release now obviously cannot be protected by techniques that do not yet exist. It will be a cat-and-mouse game, and one where the protection systems play the role of the cat.

    Anyway, if you want to try it, you can find the aforementioned Glaze at https://glaze.cs.uchicago.edu/. You may want to read one of their recent updates, which discusses at greater length the specific issue I bring up here – AI models overcoming the adversarial attack and rendering the protection ineffective – and how they updated the protection to mitigate it: https://glaze.cs.uchicago.edu/update21.html


  • I mean, the number of logical qubits has gone from basically zero not too long ago to what it is now. The whole error-correction effort has really only taken off in the past ~5 years. That Microsoft computer you mentioned, which got 4 logical qubits out of 30 physical ones (7.5 physical per logical), represents a roughly 3-fold improvement in overhead over the apparent previous best of 12 logical qubits from 288 physical ones (24 physical per logical, published earlier the same year), which itself was undoubtedly a big improvement over whatever came before.
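    For concreteness, the overhead arithmetic behind that comparison (a quick back-of-the-envelope check):

```python
# Physical-to-logical qubit overhead, using the figures cited above
prev_ratio = 288 / 12   # previous best: 24 physical qubits per logical qubit
new_ratio = 30 / 4      # the Microsoft result: 7.5 physical per logical
print(prev_ratio / new_ratio)  # ~3.2, i.e. roughly a 3-fold improvement
```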

    “And then the question is FOR WHAT? Dead people can’t make use of quantum computers, and dead people is what we will be if we don’t figure out solutions to some much more imminent, catastrophic problems in the next 10 years.”

    Strange thing to say. There are enough people on the planet to work on more than one problem at a time. Useful quantum computing will probably help solve many problems in the future too.


  • Even at 8 physical qubits per logical qubit, 6,100 physical qubits would get you 762 logical qubits.

    All I’m saying is that the technology seems to be on a trajectory where the number of qubits improves by an order of magnitude every few years, and as such it’s plausible that in another 5–10 years it could have the thousands of logical qubits needed to start doing useful computations. A mere 5 years ago, the highest physical-qubit count in a quantum computer was still measured in the tens rather than the hundreds, and 10 years ago I’m pretty sure it hadn’t even broken ten.
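    As a rough sketch of that trajectory claim – with the growth rate and the 8:1 overhead both being assumptions, not established facts:

```python
# Hypothetical extrapolation: assumes physical-qubit counts keep growing
# ~10x every 5 years and that an 8:1 physical-to-logical overhead holds.
physical_now = 6100
overhead = 8                      # assumed physical qubits per logical qubit
print(physical_now // overhead)   # 762 logical qubits at that ratio today

for years in (5, 10):
    projected = physical_now * 10 ** (years / 5)  # ~10x every 5 years
    print(years, int(projected) // overhead)      # 7625, then 76250
```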