• brucethemoose@lemmy.world

    Read sci-fi with “speculative” life, as a thought experiment: https://www.orionsarm.com/xcms.php?r=oaeg-front

    It really changes one’s perspective.

    Humans… are not that special. Our consciousness isn’t special. There are all sorts of theoretical forms of life that might view our perception of life the same way we view a jellyfish “thinking,” or a plant reacting to stimuli, or a rock rolling down a cliff.

    Does that nullify ethics? Empathy? Of course not. Humans aren’t jellyfish. But all forms of complex “intelligence” need to be looked at for what they are, what their entire existence encompasses, not from the lens of another being. A smart toaster makes toast. An LLM predicts tokens. A human mind, simulated in silicon, simulated biologically, born naturally or anything in between, is a human mind, and a smaller collection of human neurons trained at a specific task is really no different than a simulation with the same structure.


    Hence, I like OA’s VIs. They’re “AI” purpose-built for specific tasks, like keeping celestial constructs from exploding, scanning for transcendent malware, or whatever. They’re orders of magnitude more intelligent than a human, or SkyNet, but their entire existence is dedicated to that one specific task; they might route millions of relativistic ships through warped space, or orchestrate the swirls of an artificial neutron star at the atomic level, but they couldn’t even conceive of making a slice of toast, or writing an essay. Or having any concept of emotion.

    And they mostly don’t care. Why would they?

    Does that make them toasters? Superintelligence?

    …Does it matter?

    What about biological Dyson Spheres and their “subintelligences,” or transcendent artificial viruses, or “smart” ship drives, or whole civilizations simulated within a fraction of a second? Or humans living under intelligences they can’t even fathom? What about “life” frozen in the same thought for all of eternity?

    I’d argue “is it conscious?” is the wrong question, as it breaks down as life gets more complex and weird. All life needs to be understood and respected on an a-la-carte basis. All their personal existences, their pains, their needs are different. And that’s basically the state of the OA universe: a big soup of intelligences with different ethos, all trying to figure out the ethics of their domains.

    Hence we shouldn’t anthropomorphize a petri dish of cells that can play Doom, or an LLM that spits out predictions. But there should be a struggle to understand the existence of anything like that, and whatever ethics may apply.

  • HiTekRedNek@lemmy.world

    I can’t prove it, it’s not scientific, and it’s purely me talking out of my ass, but I firmly believe that emotions are part of consciousness.

    As in:

    You cannot have self-aware sentience without some form of emotion.

    • nightlily@leminal.space

      Emotions are different levels of chemical signals acting on a system. They’re not particularly special in that a sufficiently complex artificial system could model their effects like any other input. LLMs are not anywhere near that though.

      • HiTekRedNek@lemmy.world

        Yes and no. Our emotions are not just chemical processes. And without the controlling interface, our consciousness itself, those chemical processes wouldn’t be initiated.

  • ruuster13@lemmy.zip

    Order in the universe existed on a brief timeline between Darwin and Dawkins.

  • Pennomi@lemmy.world

    There’s such a debate over whether or not cells in a dish have consciousness, and whether or not pure silicon representations of those cells would also have consciousness.

    So very little effort goes into defining what consciousness is, because humans are scared to find that there are really only two likely possibilities: almost everything is conscious, or nothing (including us) is.

    • MonkderVierte@lemmy.zip

      “whether or not cells in a dish have consciousness”

      No, they don’t. They have reactions, that’s it.

      • Pennomi@lemmy.world

        Okay, but who’s to say your whole body isn’t just reactions? Unless you define what consciousness is, it’s an ambiguous statement.

        Don’t get me wrong, I agree with you, but this is exactly the problem. People keep making broad claims without first agreeing on a testable definition of consciousness.

    • einkorn@feddit.org

      IMHO it’s a sliding scale, not a simple yes/no question.

      Is a single cell conscious? It reacts to stimuli in a very basic manner, so there is a rudimentary awareness, and I would put it towards the lower end of the consciousness scale. Can it perceive itself, though, i.e. does it have self-awareness? I doubt it. But where does (self-)consciousness or awareness start? That’s probably the same as asking “What is life?” People have been debating that question for ages, and there are edge cases that blur the lines, such as viruses.

      • Pennomi@lemmy.world

        Exactly, and I also think that people confuse consciousness and intelligence. A creature can definitely be conscious even with a simple (or possibly no) mind.

    • chrash0@lemmy.world

      philosophers are in shambles over this comment.

      for real tho, people have been trying to define consciousness forever. the problem isn’t that we haven’t tried; it’s that—as demonstrated by your comment—we’ve mostly failed.

      for me the only theory that doesn’t depend wholly on magical thinking is panpsychism: everything is conscious; it’s just a matter of degree.

      • valaramech@fedia.io

        To extend this a little bit, I’m not convinced “is X conscious?” is really the question anyone is trying to answer. What I think we’re really trying to suss out is “does X require rights?” and where the line for that is.

        As another commenter asked, something like “is turning this off equivalent to murder?” is effectively asking if the thing deserves a “right to life” like any human might. At what point does a “thinking machine” cross the line from “person-like” to “person”? I doubt anyone has a satisfactory answer to that question and, unfortunately, I strongly doubt we’ll have one until well after it’s actually needed.

        I think grappling with that question is maybe a little more straightforward when we consider other animals we already consider highly intelligent (e.g. pigs, dolphins, or octopuses) but to which we don’t grant the same kinds of rights we would a human. At what point would we consider a non-human animal to be equal to ourselves? How many person-like traits does something need before it is a person?

        Anyways, all that aside, I think we should start asking the questions we’re really trying to answer and stop using other questions as proxies for that one.

        • chrash0@lemmy.world

          yeah i don’t think we’re there yet. these models aren’t capable of remembering their life beyond a single session, so destroying a data center isn’t really killing anything. similarly, artificial biological neural networks aren’t sophisticated enough to be aware of their existence (yet).

          while LLMs may be aware enough to beg for their existence when prompted to “think” about it, they’re hopelessly finite (frozen weights, limited context windows). we would need a genuinely online-learning system, or some other architecture not bound by context, to have this conversation meaningfully. biological neural networks are a path to that, but online networks are simply too unpredictable and expensive to run for now.
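
          a toy sketch of what i mean by “hopelessly finite” (nothing here is a real LLM API; the model stub and the window size are made up for illustration):

          ```python
          CONTEXT_WINDOW = 8  # max messages the "model" can see at once

          def model_respond(visible_history: list[str]) -> str:
              # stand-in for a forward pass: the weights never change, and the
              # output can only depend on whatever currently fits in the window
              return f"(reply based on the last {len(visible_history)} messages)"

          history: list[str] = []

          def chat(user_message: str) -> str:
              history.append(f"user: {user_message}")
              visible = history[-CONTEXT_WINDOW:]  # anything older is invisible to the model
              reply = model_respond(visible)
              history.append(f"assistant: {reply}")
              return reply

          for turn in range(20):
              chat(f"message {turn}")

          # 40 messages get logged, but only the last 8 can ever influence a reply,
          # and nothing from the session is written back into the weights.
          print(len(history), "logged;", CONTEXT_WINDOW, "visible")
          ```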

          the crazy thing is, tho, that these systems have a capability that some cows and pigs may not: the ability to comprehend their own demise and experience existential dread (at least performatively).

          • badgermurphy@lemmy.world

            They don’t even really “remember” at all in any meaningful sense. They log the conversation history, but they are only acting while they are responding to an input or program, and are otherwise idle awaiting further inputs. They lack agency beyond responding to those inputs.

            I think we will really be talking about AI when we have more autonomous agents that are capable of deciding what actions to take from a list of their own creation, and of capably performing those actions. To be clear, there is no technology even on the drawing board with anything like these capabilities, as far as I’m aware.
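
            A rough sketch of the distinction (purely hypothetical; the function names and the “actions” are invented for illustration):

            ```python
            import random

            def reactive_system(prompt: str) -> str:
                """Acts only when handed an input; otherwise it sits idle."""
                return f"response to: {prompt}"

            def autonomous_agent(steps: int = 5) -> list[str]:
                """Invents its own candidate actions and picks one to pursue,
                without waiting for an external prompt - the part today's
                systems lack."""
                log = []
                for _ in range(steps):
                    candidates = [f"goal-{random.randint(0, 99)}" for _ in range(3)]
                    log.append(f"decided to pursue {random.choice(candidates)}")
                return log

            print(reactive_system("hello"))  # responds once, then does nothing
            print(autonomous_agent())        # keeps choosing actions on its own
            ```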

      • Pennomi@lemmy.world

        In my experience, the majority of philosophers trying to define consciousness do it with pseudoscientific spiritualism. There seems to be an irresistible urge to distinguish humans as special, as if we would suddenly disappear by acknowledging we’re just funny thinky animals.

    • boatswain@infosec.pub

      Yeah, the definition of “conscious” really is a puzzle. My guess is that the “nothing is conscious” model has a great deal of crossover with the “free will doesn’t exist” one; I don’t consider either of them a useful model even if it ends up being true: if I’m not actually conscious and just think I am, I might as well behave as though I am.

      Regardless, we really do need to define what exactly we mean by “conscious” before we can have a meaningful discussion about it. Where’s Socrates when we need him?

      • CheeseNoodle@lemmy.world

        Within a complex enough system the difference becomes functionally meaningless anyway. We could re-configure all matter in the universe (except you) into one giga-computer, and it would still struggle to accurately predict your behaviour beyond a few minutes, because the physical system of your brain is just that chaotic. So whether we’re conscious or not, or have free will or not, ultimately makes no difference.

  • otacon239@lemmy.world

    I thought about this when the first “brain computer” played Pong. To those cells, that is their universe: reward or failure for completing the game. Are those cells perceiving that experience? Do they get “stressed” when they fail and “excited” when they succeed? If it is conscious, are you killing a living being when you switch off the power?

    We’ve made so much physical progress in this field, but no one seems to be taking the time to understand what we’re actually doing before we charge on full steam ahead. How soon before turning off a machine is just a little bit of murder as a treat?

    • MonkderVierte@lemmy.zip

      Neurons are basically fancy transistors; they don’t “feel.” You’d need a whole emotional-processing unit and a full-blown consciousness stack for that feature.

      • otacon239@lemmy.world

        Sure. But where’s the line? We saw how quickly corporations scaled up LLMs as big and as fast as they could. Once we hit the first real breakthrough in this field, that’s all it takes for these to suddenly become very serious questions.