I would appreciate it if you read all of this before responding, and as many of the links etc. as you can stomach. I put this together pretty quickly, so I apologise for any mistakes. If your response is going to be "but science" or "but computers compute" or something similar, then I'll save you the effort and give you a response now:
Crux said:
Pick one of "begging the question", "inductive fallacy", or "strawman"
The purpose of this post is to problematise generally accepted notions of sentience and demonstrate that they either fail to accurately define sentience, cannot both include human beings and exclude computers, or should be open to greater degrees of interpretation such that they might include computers. Let's say that the purpose of defining sentience is to be able to consider some object worthy of moral consideration. I would posit that our general definitions of sentience are inadequate; let's examine three:
Sentience is the ability to experience sensations (qualia) or sense of self.
This definition fails for a few reasons. The experiencing of sensations is entirely endogenous, such that experiences are totally subjective and unable to be measured in any sense. My understanding of any sensation, assuming you also experience sensations, is markedly different from yours. How then am I able to value the sensations you experience as conferring moral worth on you? Further, how am I to know that you experience any sensations at all? I have no way of reading your mind, and no way of perceiving sensations in the same way that you do. For all I know, you do not experience sensations at all and only mimic what it would be like to experience them. I hope it is not contentious to say that I ought confer moral worth on you regardless, so this criterion must fall. Further still, if I were to value your subjective experiences of sensations as conferring moral worth upon you, it is unclear what the limits to that are. Why should I not value, say, the subjective experiences of animals? Or computers? I already know that my experiences of sensations likely differ dramatically from yours; why is it not also possible for animals or computers to have subjective experiences of sensations that differ markedly from both of ours, yet still be equally worthy of moral respect?

Precisely the same questions apply to the assertion of a sense of self, or the potential or capacity to project the self into the non-immediate future, but I would additionally pose: does a sense of self mean that you have a self? If so, what is the value of the self, and can it be defined in a way that includes other human beings, whom you can never confirm have a self, while excluding animals or computers, given that you have no lived experience as any other being?
Sentience is the ability to undertake free and reasoned decisions or acts.
There has been a lot of discussion in this thread that attempts to differentiate computers and persons by reference to the supposed fact that "computers merely compute". That is to say, the processes that they undertake are reducible to no more than a series of ones and zeros: computations from which no moral worth can be drawn. The second prong of this argument is that computers can only undertake tasks for which they are programmed, and are therefore not free or autonomous beings.
With regard to the first point, the human brain is nothing more than a large set of neurons that interact with one another through chemical and electrical reactions that produce certain results. The feelings that we have are no more than the way that neurons are programmed to react to certain chemicals. What we call "reason" and "thought" is nothing more than electrical pulses fired between various neurons in certain parts of the brain. These pulses and chemicals are then interpreted as what we would typically believe to be the functions of our brain. Similarly, the processes that computers undertake are reducible further than the binaries that were referred to in this thread. Binary code is a physical manifestation and interpretation of electrical signals within circuits and processors that then undertake what you would refer to as computations. What you see as two different bodies producing different actions is, in fact, similar processors responding to stimuli in similar, pre-programmed ways to undertake computation. The only question that remains is one of complexity, but that measure is always arbitrary. Just as an infant's brain (assuming you would confer moral status onto an infant) can only undertake certain processes while others must be learned, the difference is one of degree, not of moral significance. Further, in the same manner as outlined with regard to sensations, it is impossible to infer the precise meaning of non-self computations, or the degree to which they are morally significant, as we can never experience them, and to place value on them is subjective.
With regard to autonomy, if this is a criterion then either humans fail to meet it, or computers meet it and should be considered sentient, as it is impossible to differentiate between the programmed actions of computers and those of humans. This is because, if determinism is the case, then humans are also not free to make decisions. I can't and won't prove causal determinism here due to time and length constraints, but here is a link that I think does a good job of establishing the case. The easier case to make in analogising programming is simply that you cannot conceive of things that exist outside the realm of your brain's capability to comprehend.
I think this does a reasonable job of summarising:
Another vital Kantian move in epistemology is the distinction between the phenomena and noumena. As mentioned above, the Copernican revolution for Kant requires that the human subject be understood as the beginning point for knowledge. We can never escape our limitations, positions, and subjectivity to stand outside ourselves and judge our interaction between self and external world. Hence, for Kant the ‘I think’ is the subjective condition for knowledge, which by definition can never be an objective condition. In this regard, Kant rejects both the empiricist and rationalist positions. Rationalists tend to believe there is a world which exists as a limited whole, a space/time condition in which the self exists. Empiricists tend to believe the world is unlimited, externally verifiable through proper observation. By rejecting both positions, Kant’s Copernican turn supplants both positions by arguing that the world is not an object ‘out there.’ Rather, our subjective condition allows for knowledge to come to the knower, but in a confused fashion, ultimately determined by categorization by the mind. Hence, Kant determines that there is a division between that which exists in its reality, the noumena, and that which comes to us in our subjective condition, determined by the mind’s categories, namely the phenomena.
Moreover, for Kant, the only manner in which the subjective self can have knowledge of the external world is through the phenomenon, never the noumena. However, Kant does not want to collapse into a complete solipsism in which the external world is in complete chaos, relative and lacking reality. Even though the cognitive subject can never know the extended world as it really exists, the appearance of that reality as the phenomenon is in some manner caused by the noumena. For Kant this is possible because causation is not an empirically verifiable principle based on direct observation, but rather a category used by the mind in order to structure the phenomena. However, this leaves room for problems in understanding the actual relationship between the noumena and phenomena. If in fact causation is mere mental construction, not a reality-in-and-of-itself, then how can we say with certainty that there is an actual causal relationship between the noumena and the phenomena? Perhaps there is no connection between the two, in which seemingly, Kantian epistemology does indeed slip into solipsism.
We must ask the question, under this epistemological model, what can we know? Seemingly, for Kant, the only knowledge available to the subject is the phenomenon. While he retains his position of the two stems of knowledge, empiricism and rationalism, both seem muted by the distinction between appearance and reality. If the phenomena are mere constructive structures determined by the mind, then reality, and knowledge of it, will always be elusive. Under such a model, only the appearance of such reality can be readily accepted into our noetical structure. But, perhaps more interesting, especially when held to the light of the history of philosophy, is Kant’s rejection of metaphysical knowledge, a clear result of the above distinction. The best metaphysics can achieve under such a narrow epistemological justification are transcendent illusions. Mirroring the noumena and phenomena distinction, Kant allows for the limitations of the transcendental and transcendent. Transcendent knowledge, under this view, is by definition beyond the ability of the human subject, while the best our cognitive advances can hope for is merely transcendental.
I use this quote because I think it is more accessible than most modern phenomenology and does a reasonable enough job of making my point, but I would suggest reading some Hegel or Husserl if you're interested in this line of thought. This area explores how we are limited by the actual programming of our brains in what we can understand and experience, in the same way that computers are programmed to undertake specific tasks.
Another interesting aspect of determinism that is strikingly analogous to computer programming is linguistic determinism and relativity:
The Theory of Linguistic Relativity holds that: one’s language shapes one’s view of reality. It is a mould theory in that it “represents language as a mould in terms of which thought categories are cast” (Chandler, 2002, p.1). More basically, it states that thought is cast from language-what you see is based on what you say.
The Sapir-Whorf Hypothesis can be divided into two basic components: Linguistic Determinism and Linguistic Relativity. The first part, linguistic determinism, refers to the concept that what is said, has only some effect on how concepts are recognized by the mind. This basic concept has been broken down even further into “strong” and “weak” determinism (The Sapir-Whorf Hypotheses, 2002, p.1). Strong determinism refers to a strict view that what is said is directly responsible for what is seen by the mind. In an experiment done by two Australian scientists, Peterson and Siegal, this view of determinism is shown to be supported. In the experiment, deaf children view a doll, which is placed a marble in a box. The children then see the marble removed and placed in a basket after the doll is taken away. They are later asked where they believe the doll will look for the marble upon returning. Overwhelmingly, the deaf children with deaf parents answer correctly (that the doll will look in the box). The deaf children with non-deaf parents answer mostly incorrectly.
The experiment showed clearly the relationship between deaf children whose parents have communicated with them through complex sign language and their being able to get the correct answer. The children, having grown up in an environment with complex language (American Sign Language) recognized that the doll would probably look to where she had placed the marble. The other children, who had not grown up in a stable linguistic environment (their parents not being hearing impaired and thus not being fluent in ASL) were not able to see the relationship. These results lead the experimenter John R. Skoyles to believe that the Sapir-Wharf Hypothesis was correct according to strong determinism (Current Interpretation…, p.1-2).
That is to say, the language you are taught defines your ability to think, the things that you may think about, and the way that you perceive things:
The tradition of using the semantic domain of color names as an object for investigation of linguistic relativity began with Lenneberg and Roberts' 1953 study of Zuni color terms and color memory, and Brown and Lenneberg's 1954 study of English color terms and color memory. The studies showed a correlation between the availability of color terms for specific colors and the ease with which those colors were remembered in both speakers of Zuni and English. Researchers concluded that this had to do with properties of the focal colors having higher codability than less focal colors, and not with linguistic relativity effects. Berlin and Kay's 1969 study of color terms across languages concluded that there are universal typological principles of color naming that are determined by biological factors with little or no room for relativity related effects. This study sparked a long tradition of studies into the typological universals of color terminology. Some researchers such as John A Lucy, Barbara Saunders and Stephen C Levinson have argued that Berlin and Kay's study does not in fact show that linguistic relativity in color naming is impossible, because of a number of basic unsupported assumptions in their study (such as whether all cultures in fact have a category of "color" that can be unproblematically defined and equated with the one found in Indo-European languages) and because of problems with their data stemming from those basic assumptions. Other researchers such as Robert E. Maclaury have continued investigation into the evolution of color names in specific languages, refining the possibilities of basic color term inventories. Like Berlin and Kay, Maclaury found no significant room for linguistic relativity in this domain, but rather concluded as did Berlin and Kay that the domain is governed mostly by physical-biological universals of human color perception
Humans are at least sufficiently similarly "programmed" to perform certain actions and "computations" that the distinction of free will and autonomy is arbitrary unless determinism can be proved false. Since determinism is at least sufficiently likely, we ought err on the side of caution when making moral claims, that is to say, the side where potential harm is minimised.
Biology / Science blah blah blah
Let's be clear: this is not a question of science. To make an analogy to legal practice, science would answer questions of fact, whereas the questions we are seeking to answer here are normative questions of law, that is to say, questions of how we ought apply those facts. To appeal to scientific norms is to beg the question of your specific philosophy of science and to almost certainly fall into some inductive trap:
The problem of induction is the philosophical question of whether inductive reasoning leads to knowledge understood in the classic philosophical sense, since it focuses on the lack of justification for either:
Generalizing about the properties of a class of objects based on some number of observations of particular instances of that class (for example, the inference that "all swans we have seen are white, and therefore all swans are white," before the discovery of black swans) or
Presupposing that a sequence of events in the future will occur as it always has in the past (for example, that the laws of physics will hold as they have always been observed to hold). Hume called this the principle of uniformity of nature.
Science might be able to tell us facts about objects, but the definition of life, the definition of sentience, and the way that we ought react to those things are all philosophical questions that exist outside scientific analysis, and the relevant definitions, although nearly universally agreed upon in scientific communities, remain unresolved and quite open to interpretation.
To conclude with an attempt to resolve the question, I would define sentience not as an independent experience or a performance, but as the capacity to be harmed, or to experience harms or their opposite, and thus to be worthy of moral consideration. In cases where we cannot definitively resolve that question, we ought err on the side of caution, for fear of the harm we might impose on beings that might have legitimate moral claims. Obviously this applies most clearly to other animals and, as chaos alluded to earlier, plants. But it is philosophically impossible to resolve the difference between computers and humans in any morally significant way, and we ought err on the side of caution in this instance too.
ps I'm really lazy so I probably used shorthand, but it should all be sufficiently clear in the quotes / links