I’ve been in the AI space since ChatGPT first dropped. I’ve toyed around with a lot of language models, built random side projects, built a couple from scratch, and spent hours looking at the math behind it all.
I thought about this when the first “brain computer” played Pong. To those cells, that is their universe: reward or failure for completing the game. Are those cells perceiving that experience? Do they get “stressed” when they fail and “excited” when they succeed? If that system is conscious, are you killing a living being when you switch off the power?
We’ve made so much physical progress in this field, but no one seems to be taking the time to understand what we’re actually doing before we charge on full steam ahead. How soon before turning off a machine is just a little bit of murder as a treat?
Neurons are basically fancy transistors; they don’t “feel”. You’d need a whole emotional processing unit and a full-blown consciousness stack for that feature.
Sure. But where’s the line? We saw how quickly corporations scaled up LLMs, as big and as fast as they could. One real breakthrough in this field is all it takes for these to suddenly become very serious questions.