Different rational beings can like and dislike different things. For example, I typically do not care whether I am lied to, because caring would imply that I "trust" people. On the contrary, I believe that facts should be acquired only in ways that guarantee accurate knowledge, and thus that others should be interacted with in such a way that the truth value of what they say matters as little as possible. Ideally, every fact should be distrusted unless it is confirmed by at least two independent rational agents, confirmed by an agent subject to a lie detector, or simply confirmed by a tool which can be rationally understood to be trustworthy. In a system which minimizes trust (as any truly robust system should), lying is hardly immoral and liars are nothing more than slightly defective.
Furthermore, it is perfectly possible to imagine a rational agent who can only be happy when he or she sees other rational agents suffer. A society of such agents would be impossible to sustain, and no meaningful morality could be derived for them. What you mean by "rational agent" is hardly a well-defined concept. At the core, a rational agent is an agent that can act in such a way that it maximizes its own happiness, regardless of what that happiness consists of. An irrational agent would be one that acts incompetently with respect to its own happiness (and thus hurts itself). A society of rational agents would thus only function properly if the happiness of each was reconcilable with that of the others to some extent - but even if that were the case, there is no reason to assume that what makes them happy is the same for all of them. Kant's mistake here is to assume that there exists some canonical rational agent, which is a thoroughly unsubstantiated assertion - an unwarranted generalization of his own rationality.
Secondly, basing your moral system on what rational agents would want can only make sense if society were composed of rational agents. Human society is not composed of rational agents. Far from it. Therefore, it makes no sense for morality to have anything to do with what rational agents would or wouldn't like.
On the other hand, we have solid evidence that some humans are not rational in the same way as other humans. Many humans are not rational, neither in actuality nor even in possibility. I am fairly sure that the most rational chimpanzee on the planet is more rational than the least rational human on the planet, notwithstanding mental illness. And then I've got to wonder what your criterion is to determine whether an entity is rational or not and what rationality has to do with this in the first place.
The idea is that the rational part of a person desires the truth, and to lie is to deny this. We know that we are rational beings, to whatever degree this is evident. Kant's proposal is that, being rational (to whatever degree we show it), we should not commit actions against others which a rational being (a theoretical one) would not want done to itself. Not infallible by any means, but it has a certain degree of validity.
You are outnumbered, though.
What's your political theory? Just curious.
I'll post that later; I intended to answer this last, and replying to everything else has left me quite drained. In a good way of course, since I rarely get the opportunity to have this kind of discussion.
No action "does not affect" principles. Every action would probably affect it to varying extents. A little, a lot, anything in-between. It would be simplistic to think otherwise. Hence, a morally neutral outcome can only be defined meaningfully not as one which would not affect the principles, but in fact as one which would not affect the principles more than X, where X is a predetermined threshold. It is that threshold which is problematic. Just like you can't pinpoint a moment where a small mound of sand becomes medium-sized or becomes large, you can't rationally determine a threshold between good and evil or between either and neutrality.
There's some irony in the fact that I'm the one calling you out for having theories that are way too simple, i.e. for unwittingly applying Occam's razor in a situation where it can't be applied. Because even though you don't realize it, that's exactly what you do. Relative morality is richer and much more complex than absolute morality. And as it stands, it is a much more accurate portrayal of what morality is and a much more powerful model of what it could be. To reject relative morality in favor of absolute morality is thus what I would call an undue application of Occam's razor: the former is more complex, but unlike the latter, it can represent, quantify and analyze the inherent uncertainty which exists in morality. Absolute morality is pretty lazy in comparison.
There are two relevant situations here. Firstly, when the consequences of the action are insufficiently known to give a valid answer, in which case neutral works fine. The other case is more an order of magnitude thing. Is it reasonable to steal a loaf of bread to give to a child? How about if it will feed 10 children? Or 100? Or all children? Or all children forever (that's a pretty impressive loaf of bread, but we are speaking hypothetically here)? Say that after summing up the negative and positive consequences, you find that stealing to feed one is definitely immoral, and stealing to feed all forever is definitely moral. You are, however, unsure about feeding 10. You cannot pinpoint the transition between neutral, moral and immoral, but you can say, for a given number, whether it is positive, negative, or whether you are unsure, in which case the action would be neutral.
What interests me, though, is how relative morality could solve the same problem any better. What does it matter what society thinks about your actions?
The problem that you present is largely unspecified. The life of your comrade is arguably worth much less than humanity at large - a billion times less, if you wanted to quantify that. Even if you don't know the relevant probabilities, it is reasonable to assume that your comrade's life does not warrant the risk and thus that you should kill him anyway. You also kind of evaded my actual point, which is that you don't know how to act around the threshold. Uncertainty comes in degrees. At which degree of uncertainty, in your example, would the moral decision change? Any theory of morality must incorporate a certain dose of risk aversion - but how much? More importantly even, how do you compare the evil of killing your comrade to the evil of the Evil Empire getting their way?
Killing the comrade robs him of his life, while allowing him to betray the Rebellion results in the will of humanity being crushed forever. The suppression of the will is why I could accept killing him. If killing him would save lives, that is also legitimate, but I would have to have a high degree of certainty that a large number of lives would be lost.
Here's another hairy moral problem: would you be willing to kill every single criminal on the planet, right now, in one fell swoop? Notwithstanding the possibility that non-criminals eventually take their places, you would arguably make the world better at the cost of eliminating a large chunk of its population. Or is eliminating these people an evil in itself? What would you do with a mass murderer who also happens to be the only human being capable of finding a cure for cancer (and willing to)? Would it be acceptable to torture the smartest human who ever existed in order to force him or her to find a cure for all diseases, should he or she refuse to cooperate willingly and should there be sufficient evidence that nobody else could do it?
I don't think there is any conclusive answer to these dilemmas and I don't think there could be either. There exist very rational arguments that go either way.
There might not be a common answer, but conclusive answers can be reached from a rational system.
In mine, for instance:
Yes, provided that the severity of their crimes matched certain qualifications, but I don't think you are talking about pick-pockets either. They have freely elected to do what they know is evil, are the direct cause of evil, and their elimination would remove much evil. However, I would not do so more than once, as this would realistically take away the ability to choose to be immoral, and hence the ability to be moral.
Offer a compromise: there are not enough criminals able to achieve such feats to lessen the deterrent.
Yes. Taking something by force, even a life, is legitimate if it is necessary for the salvation of many. It should, however, be done in the fairest possible way. I do realise that this changes my answer to the explorer's problem: essentially this is a decision that cannot be left in the hands of an organ like the state or the judiciary, but is the situation where "in extremis" interference may take place.
In Kant's:
No, one must not kill another rational being.
Kant's laws are unconcerned with the state's verdict.
No, one must not torture another rational being.
What cause? Racial segregation? ;)
Even that, actually: being wrong does not nullify having moral intent.
Anyway, this is not a very good example. The rapist has done evil deeds and must be punished for them in order to maintain a healthy society. So would anyone be, for doing what he did. When talking about equality, we're talking about "inherent worth" or "inherent rights", i.e. irrespective of one's actual actions. For example, we could say that X and Y are not equal because X is smarter than Y. Pushing this further, we could have legal non-equality, i.e. that X and Y are not equal in that they have different rights and X can do A, but Y cannot (perhaps because X is smarter than Y, is of a "superior" race, is born into a certain caste, or any other reason or lack thereof).
The example is appropriate, but I am not talking about decisions the state should take. One individual has chosen to do evil, acknowledged even by himself as evil, out of self-interest, while the other has intended to do good, and given of himself to do so. The first individual is worth less, even if no one knows about the actions of either. I don't think people have any inherent and permanent rights. Their only inherent right is to have a will, but if they choose to abandon it, then that is their choice. Frankly, and this is probably going to be a little unpopular, I don't think a newborn child has any worth besides the value (s)he has to his or her parents. Our worth is dependent upon our usefulness and our choices (and even more controversially, the depth of the tragedies of our lives), and our rights are a function of agreements (i.e. the social contract).
I agree, the goal of justice should be to maintain a functional society. Some elements of society are defective and we have to weed them out and deter others from imitating them, irrespective of any other factors.
No, it doesn't. First, you have to realize that this is not an argument for absolute morality but indeed an argument that people should believe morality is absolute, even though this is not the case. Second, that morality is relative does not mean most people do not largely agree on it and it does not mean that anybody has to respect the morality of other people.
Conceded.
Morality is a system that does gravitate around a target. It does tend to converge towards something. Your mistake is to believe that the target is not moving and that it is well-defined for all actions. In fact, morality tries to optimize a moving, fuzzy target. Parts of that target hardly move and are fairly clear, which results in universal and timeless agreement on some principles like "do not kill". Some parts move because society changes - now all races are equal. Some parts are fuzzy because they involve good aspects and bad aspects and it's not clear how to weigh them - but thankfully, those cases are usually contrived.
I disagree: a moral system should be able to take inputs and produce an output. The system shouldn't change with the inputs, but the outputs will. Even if a society is largely fine with rape (and there are some primitive societies like this), rape (for fun, let us say, although I struggle to find any circumstances that would justify it) is still immoral as it is a horrific assault on the privacy and will of an individual.
This has nothing to do with relativism. That some moral systems yield apparently superior results at large than others in no way entails that there is one unique moral system that trumps all others. All I am saying is that rationality cannot derive the moral system because there isn't just one. There are many of them that will offer contradictory advice in some situations, most of them contrived enough that it won't matter much. Rationality will be of no help to choose among these.
Rationality and objectivity allow you to select the correct moral choice, or take solace in the fact that it is too uncertain for a conclusion, and that any course (within a shallow pool, obviously) would be a legitimate choice. Relativism tells me that if my invention will save thousands of lives, and my society believes that my invention is immoral, then saving those lives is immoral. I just say that my society is wrong.
So you're saying that some "unhappiness" is required for you to be happy?
Look, seriously, if this is the way you think, what do you think the hive mind is going to give you? It will give you tragedy, it will give you morality and growth, it will give you pain and beauty. To the savage, it will give syphilis and cancer. It will shove both of you in a city populated with puppets that won't indulge you. And then in the end it will give you Truth and it will be up to you to deal with it. The point is that the hive mind won't just make you happy, it will make you happy exactly in the way that you would want to be happy. If you don't want to be happy, then you won't be, and it will be by your own fault. If you don't want the hive mind to tamper with your life in any way, it will just simulate real humans around you. In a way, all it would be doing would be to free you from your environment if you wish, hence giving you a strictly greater amount of freedom.
As for me, I'll be living in a nice house, doing what I like, writing cool programs and bestsellers, with as much money as I need and a gorgeous, incredibly smart woman by my side. At every moment, I'd have a small effort to make, but I would have support and it would always be doable. And then at the end of my life I'd get to know about the whole hive thing and I'd be grateful for everything that was given to me.
You have a good point here. However, I question whether the hive mind would not rob human existence of its Beauty: there would be nothing to discover but what the hive already knows - is that everything? I cannot conceive of a way for it to know and understand all of our humanity ... but if it could, I'd have to concede the superiority of the hive.