Computer Sentience

Woodchuck

actual cannibal
is a Battle Simulator Admin Alumnus
is a Forum Moderator Alumnus
I know I'm making a slippery-slope argument, but if there isn't a threshold then what would make a computer sentient and a clock not? Or are you saying that computers and clocks are equally likely to be sentient? The only philosophical difference between a computer and a clock (a mechanical one, like a pocketwatch) is that computers perform more calculations and do them more quickly than a clock does. I am willing to say that the clock comparison applies equally to a human being if you discard our subjective experiences. Are you willing to make the statement that clocks are sentient?
 

Crux

Banned deucer.
No. I am willing to make the same statement I have made about computers, animals, and human beings. It is plausible that they might be sentient and it is something we should examine.
 

Myzozoa

to find better ways to say what nobody says
is a Top Tiering Contributor Alumnus
is a Past WCoP Champion
i feel like some part of my resistance to your argument, Crux, would have to do with, like, do we really assign moral claims to anything? like, is there any evidence, in the behavior of humans, that humans are assessed by other humans as having moral claims/rights? I guess what I'm saying is that the world is so evil and bad, and humans are so bad and evil, that there is no reason to think that if computers were found to be sentient in the same ways as humans, humans would treat them differently. I can therefore plausibly 'deny' your argument on the basis that I do not think that it is normal to attach moral claims to humans that we would not attach to calculators. Thus, human behavior IS consistent with a lack of a distinction between sentience and non-sentience, so the argument really only gets people who believe, out of some innocence, that humans in fact DO (and ought to) attach moral claims to other humans that they do not assign to calculators.

#drunkposts

consider the persistence of slavery, or the fundamental slave-master relationship which persists in the absence of legal (i.e. conceived from classic liberal understandings of freedom) slavery.
 
Woodchuck said:
I know I'm making a slippery-slope argument, but if there isn't a threshold then what would make a computer sentient and a clock not? Or are you saying that computers and clocks are equally likely to be sentient? The only philosophical difference between a computer and a clock (a mechanical one, like a pocketwatch) is that computers perform more calculations and do them more quickly than a clock does. I am willing to say that the clock comparison applies equally to a human being if you discard our subjective experiences. Are you willing to make the statement that clocks are sentient?
I wouldn't say that's a slippery slope - it's a valid sorites ('pile of sand') argument.

If sentience is a property obtained by crossing a (volume) threshold of some kind, then the choice of where that threshold lies is essentially arbitrary.

See, for example, the successive redefinitions we've used to characterise animals as different to us (i.e. not self-aware or sentient) when people keep pointing out that birds have theory of mind and babies don't, etc.
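
To make the arbitrariness concrete, here is a minimal Python sketch (a toy predicate with made-up numbers, not anyone's actual proposal): any threshold definition of sentience on a continuum has exactly the shape of a heap-of-sand predicate.

Code:
# Toy sorites predicate: the cutoff does all the work, and nothing
# principled fixes its value. The numbers are arbitrary placeholders.
HEAP_THRESHOLD = 10_000  # why not 9_999? there is no principled answer

def is_heap(grains: int) -> bool:
    return grains >= HEAP_THRESHOLD

SENTIENCE_THRESHOLD = 10**9  # calculations per second, say; equally arbitrary

def is_sentient(ops_per_second: float) -> bool:
    return ops_per_second >= SENTIENCE_THRESHOLD

print(is_heap(9_999), is_heap(10_000))  # False True: one grain flips it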
 
Computer sentience? That reminds me of this movie that came out last year about a guy who falls in love with a computer operating system. It's called Her, directed by Spike Jonze, if any of you are interested. I haven't seen it. (Scarlett Johansson voices the computer, for the record.)

There are a couple of musings I have on this theory. Allow me to enumerate them.

  1. The origins of emotional sentience. What makes people emotional? Scientifically speaking, surges of various hormones have been found to correspond to the different emotions people experience, e.g. oxytocin levels peaking when one views a photo of another person they feel an attachment to. This suggests that the onset of hormones produces certain emotional responses via the brain's processing of said information. Computers are devoid of hormones and of any system to process and react to them, rendering the very proposition that computers could be sentient very... flimsy. Unless, of course, the system actually works backwards: emotions somehow evoking hormones. But that is about as logical as saying that Darwinian evolution works backwards, as in, the finch's beak determining the bug, not vice versa.
  2. The separation of the ability to 'think' versus the ability to 'feel'. Processing information and yielding a computational response, most would agree, is not the same as developing an emotional association with something. It's the difference between regurgitating a predetermined response, much like computer programming (if x is true, then y is true; if x is not true, then y is not true), and forming a deep, intimately orchestrated opinion on something. Emotion is circumstantially dependent on a variety of situational and external factors; it is too complex to be decided by a preset equation. At least, that's how I interpret emotion. So, something having the ability to respond to a stimulus in the way it has been programmed to is not, in my opinion, the same as being capable of sentience (see the sketch just below this list).
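
As a minimal sketch of the 'preset equation' contrast in point 2 (all names here are hypothetical, chosen only for illustration), a purely preprogrammed stimulus-to-response mapping looks like this; however many rules are added, the output is predetermined by the input rather than arising from anything like felt experience.

Code:
# A minimal sketch of the 'preset equation' view of machine responses:
# every stimulus maps to a fixed, predetermined output.
RESPONSES = {
    "greeting": "Hello! How can I help?",
    "insult": "I'm sorry you feel that way.",
    "praise": "Thank you!",
}

def respond(stimulus: str) -> str:
    # No internal state, context, or emotional association is involved:
    # the same input always yields the same output.
    return RESPONSES.get(stimulus, "I don't understand.")

print(respond("praise"))  # always "Thank you!", regardless of context
print(respond("praise"))  # identical again: nothing accumulates between calls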
I'm no expert, but that's just my two cents.

Edit: Here's a link to an article about artificial life being created by digitizing the brain of a worm. It's pretty interesting. Thought it might be relevant.
 
Thought is essentially just electrical signals going off in a super complex network in your brain. I don't see why computers would be any different, because they technically do think by those standards. They may not be "alive", but they could certainly become "sentient".

In this thread, I've seen the argument that computers only take input and follow a set process, whereas humans can create their own processes and make their own decisions. I fail to understand this argument. After all, don't we only take inputs and follow processes? We react to some external stimulus and behave according to experience. Yes, we can create new processes, but only based on our past experiences and as a result of OTHER processes - all things a sufficiently advanced AI could do (though it would take a lot of programming). And this magical "decision-making" power that some claim we have makes no sense to me either. After all, although we do have "free will" in that it is our brains that make the decisions, it's not like they were ever going to make a DIFFERENT decision. The stimuli and factors were all set up in such a way that making that decision was completely inevitable. We can "what-if" all we want, but we should ask ourselves, in these hypothetical scenarios, what changed to cause these decisions to alter? And what changed to cause that, and so on? All in all, the thought processes of humans are not intrinsically different from those of an AI, bar emotions, which can be simulated (albeit with a huge amount of effort).
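
As a hedged illustration of that determinism point (a toy model with hypothetical names, not a claim about real brains or any particular AI), here is a small Python sketch in which an agent's "decision" is a pure function of the stimulus and its accumulated experience; two agents given the same history can never decide differently.

Code:
# Toy model: a 'decision' is fully determined by the stimulus plus all
# prior experience; nothing else enters into it.
class Agent:
    def __init__(self):
        self.experience = []  # every stimulus the agent has encountered

    def decide(self, stimulus: str) -> str:
        # The choice is a deterministic function of the stimulus and the
        # agent's entire past; no further ingredient is involved.
        familiarity = sum(stimulus == past for past in self.experience)
        choice = "approach" if familiarity > 0 else "observe"
        self.experience.append(stimulus)
        return choice

a, b = Agent(), Agent()
history = ["food", "noise", "food"]
# Identical histories force identical decisions, every time.
assert [a.decide(s) for s in history] == [b.decide(s) for s in history]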
 
It's funny this thread was bumped. When I made my earlier arguments, I was not in a rational state of mind, all things considered. I see I realized I was wrong, which is good. I still hold that I was wrong, and at this moment I don't even understand why I argued otherwise (especially because, as far as I know, I've never held that human intelligence/sentience/consciousness is unique when I was otherwise lucid). Goes to show how flawed the human mind is, especially when its provider, the brain (like any organ), is damaged.

Regarding the moral issues of terminating sentient life (the purpose of this discussion, I suppose), I still hold that my question about the paperwork sentience wasn't satisfactorily answered. Personally, I would have no qualms terminating the paperwork sentience (i.e., discontinuing the necessary calculations, not disposing of the existing calculations and state; the latter would be spiteful and purposeless). In this case, as long as the calculations can be resumed, it's more akin to putting the sentience in a suspended state of being. If the sentience doesn't want to be suspended, however, I would hold that neither I nor anyone else can be expected to serve it by any means other than personal choice, much as I cannot expect another being to ensure my survival by force.

This is in contrast to organic, naturally occurring beings. Humans can't be suspended; on death, sentience is lost permanently. We cannot copy or resume said sentience. However, this is not relevant for a computer-based sentience; the software or hardware could be replicated identically with current technology. If my body could be replaced after destruction, while my mind remains intact, there is no immediate immoral implication (unless transfer of mind/destruction of body is forced upon me, violating personal autonomy, which is a different moral issue).
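
To illustrate the suspend/resume distinction being drawn here (a sketch under the assumption that the sentience is an ordinary computation whose state can be serialized; all names are hypothetical), any stateful Python computation can be halted, persisted, duplicated, and resumed without loss:

Code:
# Sketch: a computation's complete state can be saved ('suspended'),
# copied, and later resumed with nothing lost, unlike a biological mind,
# which cannot currently be persisted this way.
import pickle

state = {"step": 0, "memory": []}

def tick(s):
    # Advance the computation by one step, accumulating state.
    s["step"] += 1
    s["memory"].append(f"event at step {s['step']}")

for _ in range(3):
    tick(state)

# 'Suspend': persist the full state, then stop computing.
with open("mind.pkl", "wb") as f:
    pickle.dump(state, f)

# 'Resume' (possibly much later, possibly from a copy of the file):
with open("mind.pkl", "rb") as f:
    resumed = pickle.load(f)
tick(resumed)
print(resumed["step"])  # 4: the computation continues where it left off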
 
