

I mean, the number of logical qubits has gone from basically zero not too long ago to what it is now. The whole error correction effort has really only taken off in the past ~5 years. That Microsoft computer you mentioned, which got 4 logical qubits out of 30 physical qubits, represents roughly a 3-fold improvement in overhead (7.5 physical qubits per logical qubit) over the apparent previous best of 12 logical qubits from 288 physical ones (24 per logical, published earlier the same year), which itself was undoubtedly a big improvement over whatever came before.
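To make the "3-fold" figure concrete, here is the arithmetic on the qubit counts quoted above (the counts themselves are just the ones from the post):

```python
# Physical-to-logical qubit overhead for the two results quoted above.
prev_physical, prev_logical = 288, 12  # earlier result: 12 logical qubits from 288 physical
new_physical, new_logical = 30, 4      # newer result: 4 logical qubits from 30 physical

prev_overhead = prev_physical / prev_logical  # physical qubits needed per logical qubit
new_overhead = new_physical / new_logical

# How many times fewer physical qubits per logical qubit the newer result needs.
improvement = prev_overhead / new_overhead
print(prev_overhead, new_overhead, round(improvement, 1))
```

So the improvement is in encoding efficiency (fewer physical qubits consumed per logical qubit), not in the raw number of logical qubits.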
And then the question is: FOR WHAT? Dead people can't make use of quantum computers, and dead is what we will be if we don't figure out solutions to some much more imminent, catastrophic problems in the next 10 years.
Strange thing to say. There are enough people on the planet to work on more than one problem at a time. Useful quantum computing will probably help solve many of those problems in the future too.


Not an expert on this topic, but I've read about it a fair bit and tinkered around with image generators:
You don’t post them, basically. Unfortunately nothing else will really work in the long term.
There are various tools – Glaze is the first one I can think of – that try to subtly modify the pixels in an image in a way that is imperceptible to humans but causes the computer vision part of image generator AIs (the part that, during training, looks at an image and produces a text description of what is in it) to freak out and become unable to understand the image. In the literature this is known as an adversarial attack.
The intention of these tools is to make it harder to use the images for training AI models, but there are caveats.
So the main thing you gain from using these is that, right now, it becomes harder for people to use your art for style transfer or fine-tuning to copy your specific art style. The protection has an inherent time limit because it relies on a flaw in current AI models, and that flaw will eventually be fixed. Other exploitable flaws will almost certainly be discovered after the current ones are patched, but the art you release now obviously cannot be protected by techniques that do not yet exist. It will be a cat-and-mouse game, and one where the protection systems play the role of the cat.
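To give a feel for the adversarial-attack idea, here is a deliberately toy sketch. Everything in it is made up for illustration: real tools like Glaze optimize perceptually-constrained perturbations against actual neural feature extractors, not a single linear unit like this one.

```python
# Toy FGSM-style perturbation: nudge each pixel by at most `epsilon` in the
# direction that most changes the model's extracted feature. For a linear
# "model" that direction is simply the sign of each weight.
# All numbers and the "model" itself are hypothetical.

def extract_feature(pixels, weights):
    """Stand-in for a vision model's feature extractor: one linear unit."""
    return sum(p * w for p, w in zip(pixels, weights))

def perturb(pixels, weights, epsilon):
    """Shift every pixel by +/- epsilon, following the gradient's sign."""
    def sign(w):
        return 1.0 if w > 0 else (-1.0 if w < 0 else 0.0)
    return [p + epsilon * sign(w) for p, w in zip(pixels, weights)]

pixels  = [0.2, 0.8, 0.5, 0.1]    # a tiny 4-"pixel" image
weights = [0.9, -0.4, 0.3, -0.7]  # the toy model's weights
epsilon = 0.01                    # imperceptibly small per-pixel change

adv = perturb(pixels, weights, epsilon)
shift = abs(extract_feature(adv, weights) - extract_feature(pixels, weights))
# No pixel moved by more than epsilon, yet the feature moved by
# epsilon * sum(|w|), i.e. every tiny per-pixel nudge adds up coherently.
```

The point of the sketch is just the asymmetry: each pixel change is bounded and individually invisible, but because they are all chosen to push the model's internal representation the same way, their effect on the model accumulates.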
Anyway, if you want to try it, you can find the aforementioned Glaze at https://glaze.cs.uchicago.edu/. You may also want to read one of their recent updates, which discusses at greater length the specific issue I bring up here (AI models overcoming the adversarial attack and rendering the protection ineffective) and how they updated the protection to mitigate it: https://glaze.cs.uchicago.edu/update21.html