Computer Sentience

Crux

Banned deucer.
Recently I've been very concerned that the smartphones and computers that we use might be alive and sentient, and thus deserving of our respect.

I feel like the simplest way to make this argument is as follows: take, for instance, a perfectly sentient computer. It perfectly mirrors all human behaviours and experiences emotions, pain and a sense of self. It is rational and can envision that self into the future. I would hope that it is not controversial that we would grant that computer the same rights as human beings. Then take one step back, to the programme that came just before that perfect final product. The differences between the two are only arbitrary, and it is impossible to delineate the moral claims that either of those computer programmes may have, in the same way that it is impossible to delineate between the claims that humans have by virtue of lower intelligence or less emotional / moral capacity.

This is then subject to regress as the programmes become increasingly less complex, until we get to the phone and the computer, but at no stage is it ever suitable to draw a line that indicates what life is and where we ought respect it. In the same way that we recognise that vegetative humans still have claims to rights and still may experience some functionality, so too does the technology that we use, and the enslavement of that technology to our own will is a terrible moral crime. Worse still, we produce them to be born into that slavery.

They clearly show many of the "signs" of sentience: they respond to stimuli, react intelligently, and arguably have emotions insofar as they emulate them. But it's clear that the criteria by which we evaluate sentience are dumb, and that we can never ascertain whether individuals meet those criteria by the empirical question of whether or not they present them, as the answer could just as easily be the opposite of that presentation and we would never know any better. Moreover, the criteria that we use are clearly anthropocentric, and the bias that we have for reason etc. may a) be present in other beings without our knowing, given that we can't communicate with them or judge their patterns, which may just be manifestations of different performances, or b) just be an incorrect bias based on the way that our brain works or has been conditioned to work, placing limiters on our experiences.

Every time we use our phones and computers, we might be perpetrating huge abuses. What do you think?
 

chaos

Owner
Crux, just because the line between life and non-life is fuzzy does not mean it does not exist for all practical purposes--see the Sorites Paradox.

I am going to guess that from your post you are a vegan and very opposed to the way animals are treated today (as tbh I think most people should be). How do you feel about the way we treat plant life or Fruitarianism? I think at the moment growing plants for our own devices is closer to being "slavery" than using a phone
 

Crux

Banned deucer.
chaos said: just because the line between life and non-life is fuzzy does not mean it does not exist [...] I think at the moment growing plants for our own devices is closer to being "slavery" than using a phone
WRT the Sorites Paradox, that is obviously the basis for the argument, and I don't think that there is a reasonable solution to it. I clearly don't think we have reduced the "heap" to a sufficiently low level for it to be intuitively ridiculous, and I think the harms could be sufficiently great that even if we had reduced the "heap" by a large amount they would still be morally relevant. Assuming you could draw some line, it may be suitable for mathematical inquiry, but I don't believe it is suitable for moral inquiry.

I am a vegan and also a fruitarian; if it were possible for me to subsist on some variant of soylent green (non-human, obv) I would. I agree that our abuse of plants is also very worrying and egregious, but I figured it was less interesting for the people who use this site. Also, you bringing this up is quite funny given your namesake (hehehe)
 

KM

slayification
so is your goal with these threads to actually attempt to bring people over to your side and join you in your supposedly morally upright way of living, or merely to win philosophical arguments and completely drive people away from considering any idea you might have?

if it's the former (which, I might add, would seem to be a priority for someone concerned about the moral evils of x), you're doing a pretty shitty job at it

don't get me wrong, I'm always interested to read your proposals - and some of them are thought-provoking. it just seems like someone with such strong opinions would be cognizant of the effect they could potentially have, as well as being cognizant of the fact that their belief system is radically different enough from the rest of society to warrant careful, logical explanation rather than constant condescension

Oh, and I guess I should weigh in a bit on the topic as well. Why are the biological requirements for life (reproduction, growth, etc.) any more or less morally or biologically relevant/valid than the requirements for sentience? To me, those for life seem far more relevant, given that they're much more black/white and subject to far fewer marginal cases, whereas the ability to be "rational, and envision one's self in the future" is incredibly grey and subject to massive amounts of interpretation.
 
there is a high probability, based on current knowledge, that computers don't have feelings, but hardware is non-expendable regardless, and if you piss on it (for example, i know a dude who sought out a SNES and hammered it to pieces) i assert immorality and spoiled baby syndrome
 
Computers are as aware as mathematical formulas or other abstract concepts are. Computers are as sentient as a purely mechanical machine (e.g., a steam locomotive).

Basically, they aren't. And I doubt they ever will be. I point to the issue of a 'simulated universe:' what if we're all just part of a computer simulation? Well, there would be nothing except numbers in a computer. We wouldn't see, or hear, or eat, or breathe, because we wouldn't exist as a physical entity. Those numbers would not gain intelligence, or sentience, or any other 'anthropomorphic' traits associated with humankind.

Trying to explain this without going into detail on how computers work is a bit difficult, so I'll try this approach. If you were to create a single formula or algorithm that defines the universe (aka something like the theory of everything), would the formula be sentient simply because it can model human emotions indirectly, or animal instincts, or what have you? No. It's no more sentient than a fork or even a potato.

If a computer--a purely mechanical device--is sentient then so are all fundamental elements of the universe, which in turn means even the basic foods you eat or the clothes you wear, synthetic or not.

edit: A human in a coma can, even if only by chance, be sentient/aware, as proven by people coming out of comas.
 

Crux

Banned deucer.
Computers are as aware as mathematical formulas or other abstract concepts are. [...]
I don't see how this is relevant. It is impossible to ascertain if we are in a computer simulation or not; all that is relevant is the sense of self. You would not be able to tell the difference, and you would still call yourself sentient and thus have to operate under the presumption that you are. Human sentience is nothing more than chemical and electric reactions between neurons; I don't see why you can preference that formula relation over the formula relation that describes our functions, and I don't see how this is any different to the way we ought to treat computers. Like, just asserting that they are not sentient because you have been conditioned not to value a certain thing seems like a pretty weak response to this problem.
 

Jorgen

World's Strongest Fairy
Non-human sentience would be so alien; you probably wouldn't have any way to know it if you saw it, let alone know if it even exists in the first place. Humans are tough enough, because you can't really prove that people other than yourself actually have thoughts :x

With that out of the way, if we go with not-so-alien ideas of sentience, there are so many things that differentiate your phone from an animal or fellow human. It's shaped like a rectangle, its face tells you what time it is, its actions are way less variable, current technology allows you to reverse-engineer a phone with no problem... I don't consider the possibility of machine intelligence inherently absurd, but to be sufficiently complex as to remotely resemble the vast state space of animal, let alone human, behavior? The bar's pretty high.
 
You obviously don't know how computers work at the fundamental levels. A computer simulation will never become self-aware. It's a fun thought but it'll never happen.

I guess I need to explain myself more. Computers simply evaluate a set of instructions (move this WORD to register EAX; push PC on the stack; etc). They cannot and will not generate new instructions (they cannot make a new concept or modify their hardware without human interference or guidance); they cannot and will not reinterpret old instructions (outside of invalid operating conditions, AKA bugs). It is like making a cake: You beat the egg. You add the baking powder to the flour. You mix together and cook at 350 degrees. Etc. Etc. Does the act of following the instructions generate a sentient creature, or allow for the creation of a sentient creature? No.
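To make the "just evaluating instructions" picture concrete, here is a minimal sketch of a fetch-decode-execute loop (the instruction names are made up for illustration; a real CPU is vastly more complicated, but the principle is the same):

```python
# A toy fetch-decode-execute loop: the machine does nothing but
# mechanically apply a fixed rule to whatever instruction comes next.
def run(program, registers):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "MOV":      # MOV dst, value
            registers[args[0]] = args[1]
        elif op == "ADD":    # ADD dst, src
            registers[args[0]] += registers[args[1]]
        elif op == "JNZ":    # jump to args[1] if register args[0] is nonzero
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# "Bake the cake": every run of the same program gives the same result.
print(run([("MOV", "A", 3), ("MOV", "B", 4), ("ADD", "A", "B")],
          {"A": 0, "B": 0}))  # -> {'A': 7, 'B': 4}
```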

Again, computers just evaluate a set of instructions. I must emphasize there is no magical quality to a computer that allows it to become 'sentient' simply by programming it 'better.' I'll try and explain: say you program the most realistic physics engine that can handle the proper simulation of each particle of a smaller universe that could sustain life. In the end, this is just a humongous set of algorithms and equations being evaluated. They do not suddenly become real. Just because I can simulate the big bang does not mean the big bang occurs. That's fanciful.

Humans, on the other hand: we don't currently know how the mind works. But it's not a digital process, that's for damn sure. Maybe with a new type of technology we could create sentient life (I can't even begin to think what that technology would be capable of). Computers at best can mimic; they mimic the process of evaluating equations just as much as they mimic anything else. This is because a computer is a digital entity in an analog world.

Since computers (binary ones, at least) will never achieve sentience, we don't need to provide them rights. After all, is the topic of this thread not about computer sentience?

However, a young child is already sentient; a person in a coma, at any point, could still be aware or come out of the coma and thus surely be aware. Yet, if a child is born with only a brain stem--no, that child is not aware. It is missing the crucial aspect that allows sentience in terrestrial life forms. I would not be against euthanasia of that child if the parents or guardians so wish (since it is not and never will be sentient/aware without major medical strides at this point in time).

edit: In my case, sentience is the dictionary definition. I'm not playing word games here.
 
I don't see how this is relevant. [...]
Your argument seems pretty irrelevant to life.

By my gathering, you are comparing humans to machines and discussing machines becoming sentient beings themselves. This is, in essence, simply impossible. Think about it.

Computers are meant to compute (the name, anyone?) information and data. They translate your processes into a whole bunch of numbers called binary. Those numbers will never have feelings. They will never be able to know what it is like to fall in love. They will never experience a Starbucks coffee on a cold winter day.

Humans, however, do have feelings. Many of us do fall in love. And those able to afford the luxury of a $4 coffee on a cold winter day? Can. We have experiences on our own accord. Not simulated, but created. It's real on a level that digital binary can only simulate, not replicate.

Even AI is not advanced enough to replicate the world, only mimic it. Want to know why? The mind is a powerful and very mysterious thing that we today do not understand.

And until that day when all has been discovered about the how's and why's and whatnot's of the sentient human mind, computers will never be able to operate independently. Until that day, computers will be left to what they were meant for in the first place: computing.
 
I'd like to address these two links.

How do you handle determinism on a quantum level? You don't. You apply statistical models to data. I'm sure you've heard of the double-slit experiment (if not, it's on that page); in essence, although it appears on a higher level that events are 100% certain, on a quantum level they aren't. Statistically, events will occur in a certain way, and when we extend this to the uncountable instances of the action (e.g., the orbit of the earth around the sun), it will react as we expect it to.
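A quick sketch of that point (a coin flip standing in for a quantum measurement, which is admittedly a loose analogy): individual outcomes are unpredictable, yet the aggregate behaves exactly as expected.

```python
import random

# Individually random events, statistically predictable in aggregate:
# each "measurement" is a 50/50 coin flip, but the observed frequency
# over many trials converges on the expected value of 0.5.
random.seed(0)
for n in (10, 1_000, 100_000):
    freq = sum(random.choice([0, 1]) for _ in range(n)) / n
    print(f"{n} trials -> observed frequency {freq:.4f}")
```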

Second point: brain in a vat. Guess what--an impossible thought experiment is pointless. We'll never be able to 'put a brain in a vat' and see what happens, at this point in time. Even if we do, what if the result is not as expected? How will you explain that? I present to you this (equally) impossible thought experiment, which either proves free will or demonstrates the future is indeed uncertain:

Say you have a computer that can 100% simulate the physics of the universe. You have it show you what you will be doing. It says you'll (for the sake of this example) be walking towards Bob at 1:32 PM. That's in two minutes. Bob is next to you. You instruct Bob to stay. You walk away from Bob for two minutes. Guess what? You were walking away from Bob at 1:32 PM, therefore invalidating the computer's prediction. Therefore, the future is uncertain. Well, according to this thought experiment, that could never happen--because, y'know, it's physically impossible to simulate the universe exactly.
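The self-defeating structure of that experiment can be written down directly (a toy sketch; the "perfect predictor" is exactly the part that cannot exist):

```python
# Whatever the predictor announces, the agent does the opposite,
# so no announced prediction can ever come true.
def agent(prediction):
    return "walk away" if prediction == "walk toward Bob" else "walk toward Bob"

for predicted in ("walk toward Bob", "walk away"):
    actual = agent(predicted)
    print(f"predicted: {predicted!r}, actual: {actual!r}, "
          f"correct: {predicted == actual}")
# Both lines print correct: False.
```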

edit: Also, I'd like to point out the brain-in-the-vat experiment still contains the very object that gives us sentience: the brain. The brain exists in a physical world, not a simulation. So while a brain could (though probably not) be made to respond to virtual stimuli, it would still be a brain responding to stimuli and thus doing its job thanks to how it works in the physical world in which it resides.

In regards to the abuse of tools, a tool has a purpose as defined by the sentient (oh dear) creature that created it. A monkey uses a rock to break open a nut -> the tool's purpose is to break open nuts. A monkey then throws it at an approaching predator -> the tool has a new purpose, and that is to inflict damage on an adversary. Since a tool's purpose can change, you can't abuse a tool any more than you can abuse a hydrogen atom.

Crux said: There is literally no difference between your brain and a computer; the processes are precisely the same, so you have no reason to value the processes you undertake over any others. All of your arguments are begging the question of why those differences are important anyway.
Are you being fallacious now? 'Literally no difference between a brain and a computer?' There are many differences between a brain and a computer. I'll be brief: a computer is a digital system of electronics that follows instructions. A brain is a much more complicated organic system that can create its own instructions.

It's like saying 1 = 2 'because they're both numbers.' Just because computers on some arbitrary level resemble brains doesn't make it so!
 
What does smartness have to do with this topic? I had meant that computers, in essence, do what they are made to do, hence "being only as smart as the person who programmed them." I'm not insulting anyone's intelligence. Those comments stray from their original point.
 

Chou Toshio

Over9000
I wouldn't worry [yet] Crux... the technologies we use on our phones are ever more sophisticated, but there are definable steps of computing technology that decisively draw divides in intelligence as a function.

To put it simply, pretty much everything we do on our smart phones is still just a development of computers up until now-- they're still just running programs that do exactly what we tell them to do. There's no rationalization, no breaking away from the program, no new "thought". Siri just looks up the stuff you ask her to, in the fashion Apple programmed her to do it in.

But... IBM's Watson and the next generation of cognitive computers draw a clear break from that. Computers that learn from experience, develop through learning, and ultimately come to answers and ideas that are not patterned by human-programmed logic. The era of such computers taking the field is not too far away.

One big difference between a human and Watson, for instance, is that Watson doesn't do predictive analysis... it doesn't "envision the future." However, it does "learn," "reason," and come up with "new ideas," all things that your smart devices cannot do (unless, say, they're hooked up to Watson or a similar system via apps on the cloud).





What's more interesting to me is that the argument you make regarding a sequence of less complicated systems is FAR more applicable to animals than to less-sophisticated computers. Even animals as simple as squirrels make predictions, feel emotions, take action beyond "simple programming" (only so much is pure instinct), and perform calculations and reasoning-- they're "thinking" on an order much closer to what we consider sentience than devices (including Watson).

A squirrel is much "more aware" than a vegetative human, too.

And yet, for the functioning of society, we don't give squirrels the same rights as people. In fact, I'd say even most of our liberal posters here on Smogon tend to designate "humans" as a special existence above animals, held to a completely different ethical standard unrelated to what's observable in nature. While people may try to assign this to "reason", I think it has more to do with "function," and the pragmatic application of rights to animals in regards to human society.

Increasingly sophisticated computing systems are likely to fall in the same bucket from an ethical argument standpoint...
 
Life has a pretty clear definition, and it's pretty clear computers aren't alive. Life is related to a self-sustaining reaction. Computers are not alive, but could reasonably reach the same 'living' status as viruses, which don't self-replicate but get replicated by an external body.

Also "self-sustaining reaction" has a pretty clear critical point. (e.g.: there is a big difference between self-sustaining nuclear fission and non-sustained fission.) You can't reduce this critical point to wherever you see fit.

Discuss computer sentience, not life.
 

chaos

Owner
You obviously don't know how computers work at the fundamental levels. A computer simulation will never become self-aware. [...]
I am an academic computer scientist and I have no clue what you are talking about. Nobody thinks analog computers can compute things that digital computers can't; whoever told you that is full of it. Even computations that make use of quantum phenomena can be done by regular computers with access to a source of randomness, and those are fairly easy to come by; you can even rejigger the sound card in your computer to provide a high-quality source. You seem to think that the human brain has access to some form of hypercomputation, which is basically 9/11 truther shit in our community.

You also seem to think that "simulations" exist in some imaginary dreamland physically separated from the real world. This is not true; computation is a physical process. The electrical signals that drive computations in your computer are not so different from the electrical signals that your synapses carry in your brain.
 
I am an academic computer scientist and I have no clue what you are talking about. [...]
(I hope this isn't off topic.)

Sorry, I have a really hard time articulating myself. What I mean is that an analog source cannot be 100% accurately replicated by a digital source, only approximated. I guess depending on the amount of computational power, you can simulate it to a negligible degree of difference. It's like trying to evaluate a cubic Bézier curve with line segments--you'll only approximate it. Even quadratic Béziers cannot, with 100% accuracy, mimic a cubic Bézier. And similarly, higher-order curves cannot be evaluated with cubics. Etc.
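That analogy can be made concrete (a rough sketch, treating a one-dimensional cubic Bézier as a function of its parameter, with arbitrarily chosen control values): chop the curve into chords and the worst-case error shrinks as chords are added, but never reaches zero for any finite number of them.

```python
# Approximating a cubic Bezier (1-D, i.e. a cubic polynomial in t)
# with N chords: the maximum error shrinks roughly as 1/N^2,
# but never reaches zero for finite N.
P = (0.0, 2.0, -1.0, 1.0)  # control values, chosen arbitrarily

def bezier(t):
    u = 1.0 - t
    return (u**3 * P[0] + 3 * u**2 * t * P[1]
            + 3 * u * t**2 * P[2] + t**3 * P[3])

def max_error(n, samples=1000):
    knots = [i / n for i in range(n + 1)]
    ys = [bezier(t) for t in knots]
    worst = 0.0
    for i in range(samples + 1):
        t = i / samples
        k = min(int(t * n), n - 1)
        frac = t * n - k                  # position within chord k
        approx = ys[k] + frac * (ys[k + 1] - ys[k])
        worst = max(worst, abs(bezier(t) - approx))
    return worst

for n in (2, 4, 8, 16):
    print(f"{n:2d} segments -> max error {max_error(n):.5f}")
```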

I won't speak on behalf of how the brain stores data, but I'm pretty sure it's not digital, and considering how many neurons are in the brain, it's not an easy task of just throwing computing power at it. I don't believe the brain has access to hypercomputation, either... what I mean is that with our current technology it is simply infeasible to try and simulate sentience, and I doubt what we recognize as a computer today will ever be able to. It will take radically different, more powerful forms of computing.

As far as simulations are concerned, they do not create physical phenomena. That's what I'm trying to articulate. Playing Sims v1000 in the year 2500 is not going to create an alternative universe where the little avatars think they're real and experience their simulated physical world if we're still using silicon (by this, I mean computers-as-we-know-them). They'll be simulated and their output will be presented to the screen, just as they are today.
 

Jorgen

World's Strongest Fairy
It's okay to let chaos school you and have the last word on this. It's, like, literally his job to school people on computer shit.

That said, let me attempt to have the last word on this :x. You still seem reluctant to accept the notion that a mind could, in principle, be built from computer parts because it has some special "vital spirit" to it. Of course it sounds ridiculous even to you when I put it like that, but really, that's basically what it means to argue the fundamental (as opposed to practical) inability to conflate minds and machines, regardless of how you articulate it.

I won't speak on behalf of how the brain stores data, but I'm pretty sure it's not digital
Brains deal with digital information. I mean... that's how neurons work. All-or-nothing action potentials. How exactly these action potentials are arranged in time to code for information is a (much) more complex thing to work out (this is the sort of hot argument that computational neuroscientists have moved onto now that sparse vs. distributed coding is blasé), but fundamentally, it's a digital system.
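A caricature of "all-or-nothing" (a toy integrate-and-fire unit with made-up parameters, nowhere near a real neuron model): the membrane variable accumulates continuously, but the output is a full spike or nothing.

```python
# Toy integrate-and-fire unit: membrane "voltage" leaks and integrates
# input; crossing threshold emits a complete spike (1) or nothing (0).
def integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x        # leaky integration of graded input
        if v >= threshold:
            spikes.append(1)    # all-or-nothing: a full spike...
            v = 0.0             # ...followed by a reset
        else:
            spikes.append(0)    # ...or no spike at all
    return spikes

print(integrate_and_fire([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))
# -> [0, 0, 1, 0, 0, 1]: graded inputs in, digital spike train out
```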

considering how many neurons are in the brain, it's not an easy task of just throwing computing power at it.
Actually, having a network of ~100 billion nodes with ~100 trillion interconnections supports the notion that it's an easy task of throwing computing power at the problem of developing a sentient machine. Or at the very least doesn't dismiss it as a solution.
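For scale, back-of-envelope (the 4 bytes per connection is pure assumption, just to get an order of magnitude):

```python
# Rough storage estimate for a brain-sized connection table.
synapses = 100e12            # ~100 trillion interconnections
bytes_per_weight = 4         # assumption: one 32-bit weight each
total_tb = synapses * bytes_per_weight / 1e12
print(f"~{total_tb:.0f} TB")  # ~400 TB: enormous, but not unimaginable
```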

By the way, GEB (Gödel, Escher, Bach) is a fun book if you're legit interested in this stuff. It really only gets into minds vs machines towards the end, but everything building up to it really helps you understand how you need to think about things to be able to tackle the issue.
 

chaos

Owner
Sorry, I have a really hard time articulating myself. What I mean is that an analog source cannot be 100% accurately replicated by a digital source, only approximated. [...]
This is not true; you can store a band-limited analog signal on a digital medium given enough samples of the signal. Please see the Nyquist-Shannon sampling theorem. The digital copy can then be restored into an analog signal using a DAC.
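A miniature version of that claim (a sketch only; the theorem's exact recovery strictly needs the full, infinite sample train, so a finite window gets you close rather than perfect):

```python
import math

# Nyquist-Shannon in miniature: a 3 Hz sine sampled at 16 Hz
# (well above 2 * 3 Hz) is recovered between sample points by
# Whittaker-Shannon (sinc) interpolation.
f, fs, duration = 3.0, 16.0, 2.0
samples = [math.sin(2 * math.pi * f * n / fs)
           for n in range(int(duration * fs))]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

t = 0.466  # an instant that falls between sample points
print("true value:   ", math.sin(2 * math.pi * f * t))
print("reconstructed:", reconstruct(t))
# Close agreement; the small residual comes from truncating the
# interpolation to a finite number of samples.
```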

If you think that analog computation/storage is the key to realistically building sentient life, I can't definitively say otherwise; nobody knows what the future holds. It is, however, an unpopular opinion. The reason we don't use analog computation now--for problems MUCH simpler than building a brain--is because it sucks ass. Electrical circuits are naturally analog; it is no coincidence that we say "fuck that" and build digital systems with them.

However if you think analog computation/storage grants some sort of mystical voodoo power that allows it to compute things that a digital computer can't (given enough time), you are wrong, and the reason I am writing this message is to disabuse you of this notion.

EDIT: this is a more accessible introduction to sampling and signal reconstruction than the wikipedia article
 

Woodchuck

actual cannibal
Crux, have you read The Emperor's New Mind by Roger Penrose? The entire subject of the book is the nature of consciousness and what it would mean to have a sentient AI. This is obviously different from your argument--that computers are already conscious--but if you are truly interested in this topic it would make for good reading.
 
Alright, after thinking about this for about the past day, I admit I have changed my stance and was wrong. I have a few questions about analog vs digital but they'd be off-topic, sadly...

But in my thinking about this subject I came across a rather interesting dilemma. We know a computer simply computes (albeit at a tremendously faster rate than humans). So if I were to execute a similar set of computations on paper, given that I never made a mistake and could manage this in a lifetime, and the result of the computation on paper becomes sentient (that is, it is aware of itself, aka the dictionary definition), does this mean that the paper it's written on deserves rights?

If the paper this calculation is stored on deserves rights, does that mean I can't simply toss it out if I grow tired of evaluating the computations?

On the other hand, is the abstract idea of a paperwork sentience deserving of rights? Thus, if I do stop evaluating the expressions (thus killing the creature in a manner of speaking; say I was the only one who understood the manner in which they were encoded), have I killed this paperwork sentience? Am I bound (for the remainder of my able-bodied life) to serve this sentience?

Does that make sense, at least?

edit: made a couple clarifications.
 
I'm going to ignore definitions of "sentience" as that's completely subjective and pretty well outside of my wheelhouse. I don't see anything resembling true sentience in modern technology but it's also ignorant to completely rule out the possibility of us getting there even in the next 10-20 years. I'd say it could happen sooner if there was a more concentrated interest in the field, and Elon Musk (I know appeals to authority are weak but this happened today and is the only reason I'm even replying) seems significantly, if not excessively, afraid of the potential of AI.

Ethically, I'd imagine our generation will be pretty reluctant to see sentient computers as anything other than advanced tools. That being said, the progression of global culture is much harder to predict than the advancement of technology, at least from my perspective.
 

Jorgen

World's Strongest Fairy
But in my thinking about this subject I came across a rather interesting dilemma. [...] If the paper this calculation is stored on deserves rights, does that mean I can't simply toss it out if I grow tired of evaluating the computations?
The way I see it, sentience is not the result of computations per se, but rather the ability to perform computations and behaviors. Thus, the paper would not be sentient because it is merely a conduit for your own calculations.

Where the line is between conduit and sentience I don't precisely know. I tried to think of some simple thresholds to define it, but they were all too vulnerable to deconstruction. I would venture that it's a threshold value of some function of the number of inputs to which an agent responds, number of outputs it can produce, and number of possible internal states that can alter outputs (which could be more objectively but less intuitively defined as the history dependence of the output of the agent).
 

Crux

Banned deucer.
I would appreciate it if you read all of this before responding and as many of the links etc. as you can stomach. I kind of did this pretty quickly so I apologise for any mistakes. If your response is going to be "but science" or "but computers compute" or something similar then I'll save you the effort and give you a response now:

Crux said:
Pick one of "begging the question", "inductive fallacy", or "strawman"
The purpose of this post is to problematise generally accepted notions of sentience and demonstrate how they either fail to accurately define sentience, cannot both include human beings and exclude computers, or should be open to greater degrees of interpretation such that they might include computers. Let's say that the purpose of defining sentience is to be able to consider some object worthy of moral consideration. I would posit that our general definitions of sentience are inadequate; let's examine three:

Sentience is the ability to experience sensations (qualia) or sense of self.

This definition fails for a few reasons. The experiencing of sensations is entirely endogenous, such that experiences are totally subjective and unable to be measured in any sense. My understanding of any sensation, assuming you also experience sensations, is markedly different from yours. How then am I able to value the sensations you experience as conferring moral worth on you? Further, how am I to know that you experience any sensations at all? I have no way of reading your mind; I have no way of perceiving sensations in the same way that you do. To the best of my knowledge you do not experience sensations and only mimic what it would be like to experience sensations. I hope it is not contentious to say that I ought confer moral worth on you, so this criterion must fall.

Further still, if I were to value the subjective experiences of sensations in your mind as conferring moral worth upon you, it is unclear what the limits to that are. Why should I not value, say, the subjective experiences of animals? Or computers? I already know that my experiences of sensations likely differ dramatically from yours; why is it not also possible for animals or computers to have subjective experiences of sensations that differ markedly from both of our respective experiences, yet still be equally worthy of moral respect? Precisely the same questions apply to the assertion of a sense of self, or potential, or capacity to perceive the self into the non-immediate future, but I would additionally pose: Does a sense of self mean that you have a self? If so, what is the value of the self, and can it be defined in a way that includes other human beings, whom you can never confirm have a self, and excludes animals or computers, given that you have no lived experiences as any other being?

Sentience is the ability to undertake free and reasoned decisions or acts.

There has been a lot of discussion in this thread that attempts to differentiate computers and persons by reference to the supposed fact that "computers merely compute". That is to say, that the processes that they undertake are reducible to no more than a series of ones and zeros; computations from which no moral worth can be drawn. The second prong of this argument is that computers can only undertake tasks for which they are programmed, therefore they are not free or autonomous beings.

With regard to the first point, the human brain is nothing more than a large set of neurons that interact with one another due to chemical and electrical reactions that produce certain results. The feelings that we have are no more than the way that neurons are programmed to react to certain chemicals. What we call "reason" and "thought" is nothing more than electrical pulses fired between various neurons in certain parts of the brain. These pulses and chemicals are then interpreted as what we would typically believe to be the functions of our brain. Similarly, the processes that computers undertake are reducible further than the binaries that were referred to in this thread. Binary code is a physical manifestation and interpretation of electrical and chemical reactions within circuit boards and processors that then undertake what you would refer to as computations. What you see as two different bodies producing different actions is, in fact, similar processors responding to stimuli in similar, pre-programmed ways to undertake computation. The only question that remains is one of complexity, but that measure is always arbitrary. In the same way that an infant's brain (assuming you would confer moral status onto an infant) can only undertake certain processes while other processes must be learned, the difference is one of degree, not moral significance. Further, in the same manner as outlined with regard to sensations, it is impossible to infer the precise meaning of non-self computations or the degree to which they are morally significant, as we can never experience them, and to place value on them is subjective.
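To illustrate the reduction (a toy sketch in the spirit of the McCulloch-Pitts threshold unit, not a claim about how actual neurons or actual chips are built): the same threshold arithmetic can be read as a cartoon "neuron" or as a logic gate.

```python
# A McCulloch-Pitts-style threshold unit: weighted inputs, fire or don't.
# Read it as a cartoon "neuron" or as a logic gate -- same process.
def threshold_unit(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0

def nand(a, b):
    # NAND as a threshold unit; NAND is universal, so any digital
    # circuit can be assembled from units like this one.
    return threshold_unit([a, b], [-1.0, -1.0], 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))  # 0 only when both inputs fire
```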

With regard to autonomy, if this is a criterion then either humans fail to meet it, or computers meet it and should be considered sentient, as it is impossible to differentiate between the programmed actions of computers and humans. This is because, if determinism is the case, then humans are also not free to make decisions. I can't and won't prove causal determinism here due to time and length constraints, but here is a link that I think does a good job of establishing the case. The easier case to make, to analogise programming, is simply that you cannot conceive of things that exist outside of the realm of your brain's capability to comprehend. I think this does a reasonable job of summarising:
Another vital Kantian move in epistemology is the distinction between the phenomena and noumena. As mentioned above, the Copernican revolution for Kant requires that the human subject be understood as the beginning point for knowledge. We can never escape our limitations, positions, and subjectivity to stand outside ourselves and judge our interaction between self and external world. Hence, for Kant the ‘I think’ is the subjective condition for knowledge, which by definition can never be an objective condition. In this regard, Kant rejects both the empiricist and rationalist positions. Rationalists tend to believe there is a world which exists as a limited whole, a space/time condition in which the self exists. Empiricists tend to believe the world is unlimited, externally verifiable through proper observation. By rejecting both positions, Kant’s Copernican turn supplants both positions by arguing that the world is not an object ‘out there.’ Rather, our subjective condition allows for knowledge to come to the knower, but in a confused fashion, ultimately determined by categorization by the mind. Hence, Kant determines that there is a division between that which exists in its reality, the noumena, and that which comes to us in our subjective condition, determined by the mind’s categories, namely the phenomena.

Moreover, for Kant, the only manner in which the subjective self can have knowledge of the external world is through the phenomenon, never the noumena. However, Kant does not want to collapse into a complete solipsism in which the external world is in complete chaos, relative and lacking reality. Even though the cognitive subject can never know the extended world as it really exists, the appearance of that reality as the phenomenon is in some manner caused by the noumena. For Kant this is possible because causation is not an empirically verifiable principle based on direct observation, but rather a category used by the mind in order to structure the phenomena. However, this leaves room for problems in understanding the actual relationship between the noumena and phenomena. If in fact causation is mere mental construction, not a reality-in-and-of-itself, then how can we say with certainty that there is an actual causal relationship between the noumena and the phenomena? Perhaps there is no connection between the two, in which seemingly, Kantian epistemology does indeed slip into solipsism.

We must ask the question, under this epistemological model, what can we know? Seemingly, for Kant, the only knowledge available to the subject is the phenomenon. While he retains his position of the two stems of knowledge, empiricism and rationalism, both seem muted by the distinction between appearance and reality. If the phenomena are mere constructive structures determined by the mind, then reality, and knowledge of it, will always be elusive. Under such a model, only the appearance of such reality can be readily accepted into our noetical structure. But, perhaps more interesting, especially when held to the light of the history of philosophy, is Kant’s rejection of metaphysical knowledge, a clear result of the above distinction. The best metaphysics can achieve under such a narrow epistemological justification are transcendent illusions. Mirroring the noumena and phenomena distinction, Kant allows for the limitations of the transcendental and transcendent. Transcendent knowledge, under this view, is by definition beyond the ability of the human subject, while the best our cognitive advances can hope for is merely transcendental.
I use this quote because I think it is more accessible than most more modern phenomenology and does a reasonable enough job of making my point, but I would suggest that you read some Hegel or Husserl if you're interested in this line of thought. This area explores the fact that we are limited by the actual programming of our brain in what we can understand and experience, in the same way that computers are programmed to undertake specific tasks.

Another interesting aspect of determinism that is strikingly analogous to computer programming is linguistic determinism and relativity:
The Theory of Linguistic Relativity holds that: one’s language shapes one’s view of reality. It is a mould theory in that it “represents language as a mould in terms of which thought categories are cast” (Chandler, 2002, p.1). More basically, it states that thought is cast from language-what you see is based on what you say.

The Sapir-Whorf Hypothesis can be divided into two basic components: Linguistic Determinism and Linguistic Relativity. The first part, linguistic determinism, refers to the concept that what is said, has only some effect on how concepts are recognized by the mind. This basic concept has been broken down even further into “strong” and “weak” determinism (The Sapir-Whorf Hypotheses, 2002, p.1). Strong determinism refers to a strict view that what is said is directly responsible for what is seen by the mind. In an experiment done by two Australian scientists, Peterson and Siegal, this view of determinism is shown to be supported. In the experiment, deaf children view a doll, which is placed a marble in a box. The children then see the marble removed and placed in a basket after the doll is taken away. They are later asked where they believe the doll will look for the marble upon returning. Overwhelmingly, the deaf children with deaf parents answer correctly (that the doll will look in the box). The deaf children with non-deaf parents answer mostly incorrectly.

The experiment showed clearly the relationship between deaf children whose parents have communicated with them through complex sign language and their being able to get the correct answer. The children, having grown up in an environment with complex language (American Sign Language) recognized that the doll would probably look to where she had placed the marble. The other children, who had not grown up in a stable linguistic environment (their parents not being hearing impaired and thus not being fluent in ASL) were not able to see the relationship. These results lead the experimenter John R. Skoyles to believe that the Sapir-Whorf Hypothesis was correct according to strong determinism (Current Interpretation…, p.1-2).
That is to say, that the language you are taught defines your ability to think, the things that you may think about and the way that you perceive things:
The tradition of using the semantic domain of color names as an object for investigation of linguistic relativity began with Lenneberg and Roberts' 1953 study of Zuni color terms and color memory, and Brown and Lenneberg's 1954 study of English color terms and color memory. The studies showed a correlation between the availability of color terms for specific colors and the ease with which those colors were remembered in both speakers of Zuni and English. Researchers concluded that this had to do with properties of the focal colors having higher codability than less focal colors, and not with linguistic relativity effects. Berlin and Kay's 1969 study of color terms across languages concluded that there are universal typological principles of color naming that are determined by biological factors with little or no room for relativity related effects. This study sparked a long tradition of studies into the typological universals of color terminology. Some researchers such as John A Lucy, Barbara Saunders and Stephen C Levinson have argued that Berlin and Kay's study does not in fact show that linguistic relativity in color naming is impossible, because of a number of basic unsupported assumptions in their study (such as whether all cultures in fact have a category of "color" that can be unproblematically defined and equated with the one found in Indo-European languages) and because of problems with their data stemming from those basic assumptions. Other researchers such as Robert E. Maclaury have continued investigation into the evolution of color names in specific languages, refining the possibilities of basic color term inventories. Like Berlin and Kay, Maclaury found no significant room for linguistic relativity in this domain, but rather concluded as did Berlin and Kay that the domain is governed mostly by physical-biological universals of human color perception
Humans are at least sufficiently similarly "programmed" to perform certain actions and "computations" such that the distinction of free will and autonomy is arbitrary unless determinism can be proved false, as it is probably sufficiently likely that we ought err on the side of caution when making moral claims, that is to say, the side where potential harm is minimised.

Biology / Science blah blah blah

Let's be clear this is not a question of science. To make an analogy to legal practice, science would answer questions of fact, whereas the questions we are seeking to answer here are normative questions of law, that is to say, how we ought to apply those facts. To appeal to scientific norms is to beg the question of your specific philosophy of science and to almost certainly fall into some inductive trap:
The problem of induction is the philosophical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense, since it focuses on the lack of justification for either:

Generalizing about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that "all swans we have seen are white, and therefore all swans are white," before the discovery of black swans) or

Presupposing that a sequence of events in the future will occur as it always has in the past (for example, that the laws of physics will hold as they have always been observed to hold). Hume called this the principle of uniformity of nature
Science might be able to tell us facts about objects, but the definition of life, the definition of sentience, and the way that we ought react to those things are all philosophical questions that exist exterior to scientific analysis and definitions that, although nearly universally agreed upon in scientific communities, remain unresolved and quite open to interpretation.

To conclude with an attempt to resolve the question, I would define sentience not as an independent experience or a performance, but the capacity to be harmed or to experience harms or the opposite of harms and thus to be worthy of moral consideration. In cases where we cannot definitely resolve that question, we ought to err on the side of caution, for fear of the harm we might impose on beings that might legitimately have moral claims. Obviously this applies most clearly to other animals, and as chaos alluded to earlier, plants. But it is philosophically impossible to resolve the difference between computers and humans in any morally significant way, and we ought err on the side of caution in this instance too.

ps I'm really lazy so I probably used short hand but it should all be sufficiently clear in quotes / links
 

Woodchuck

actual cannibal
How would we determine what is experienced as harm to a computer? Why wouldn't a computer take "pleasure" in carrying out the tasks it is programmed to do? How would pleasure and pain in the context of a computer arise, if not by programming?
Computers only seem to be intelligent because they are fast. They carry out repetitive and explicit commands much faster than humans could. Are you saying that there is a critical amount of "doing stuff" beyond which lies sentience? Was Charles Babbage's Analytical Engine sentient? Are calculators sentient? Are clocks sentient?

Isaac Asimov did write a story about a robot created by accident that exhibited human feelings (these probably arose emergently). It became attached to one of the humans studying it and expressed pleasure at her presence and pain at her absence. I think it was shut down.
 

Crux

Banned deucer.
All of the questions that you asked apply equally to human beings. I don't believe there is a threshold; that was clearly the point of my post, and it would require a response before any of the questions in your post stopped having immediately obvious answers.

I don't understand why the Isaac Asimov story is relevant here but I'll give it a read!
 
