
The Basis of Morality

Fuck Kant.
Seconded. So very seconded. I had that opinion even before I knew that wonderful little bit of information regarding his perceptions of rape (which naturally made my opinion of him about ten times worse). Seriously, that man is philosophy's ultimate troll.
 
actually, what Kant said was that it would be better for a woman to die fighting off a rapist than to live with the shame of having been raped (i disagree with this).
on the other hand, he also stated that no one has the right to rape, since that is an expression of dominance over another (which he highly disagreed with). not even one's spouse has the right to rape (according to old laws that was allowed, but it implies ownership of a person, or at least a part of them)

of course, none of this touches on rape fetishes (of which i am not referring to people who wish to rape or view faux rape, but to those that actually are aroused by the idea of being raped).
 
http://www.philosophicalmisadventures.com/?p=32

aah yes, you're right.
without sidetracking too much, rape fantasies are a whole other ballpark in that the 'victim' is in fact claiming her/his own ground: they've chosen to be physically and mentally dominated in that regard - it's far more a subset of bdsm. a rape fantasy doesn't mean that the person wants to be raped; if someone were to rape that person without prior consent (for example, a boyfriend insisting on sex when no consent was given), it would be no less rape than if a Catholic schoolgirl were concerned; there's no 'rape hierarchy'.

Again though, fuck Kant.
Riddle me this, Mr. Kant: if I'm working on the rail tracks and have the choice between plowing the train into a wall, killing the 300 people on board, and putting it through a tunnel, killing the 2 people in the tunnel... OH NO WAIT YOUR PHILOSOPHY JUST FELL APART BECAUSE YOU CLAIM EVERYONE HAS ABSOLUTE WORTH. Shut up Kant, go suck a horse.
 
actually, there is a form of rape fantasy that is based on being the victim. it's mostly a roleplay thing.

i dated a girl with that fetish, it was... interesting. (she actually asked me if i would punch her a few times before i held her down. the problem was that i don't hit women)
 
Again though, fuck Kant.
Riddle me this, Mr. Kant: if I'm working on the rail tracks and have the choice between plowing the train into a wall, killing the 300 people on board, and putting it through a tunnel, killing the 2 people in the tunnel... OH NO WAIT YOUR PHILOSOPHY JUST FELL APART BECAUSE YOU CLAIM EVERYONE HAS ABSOLUTE WORTH. Shut up Kant, go suck a horse.

So I guess you also strapped the two people inside the tunnel, which would violate Kant's principle then, wouldn't it? Kant's philosophy does not break down just because you present a false dichotomy.

RE: Ascalon.

In general: My explanation of the rational terrorist is that he is using clear cause-effect logic in calculating his next move. I ascribed no morality to it, just that his action follows rational means to a predictable conclusion based on his own premise of using fear for control. My argument is that rational and moral are two different things, just as I will explain that legal and moral are also two different things below.

Ascalon said:
I vehemently oppose this, primarily because you cannot know what the results will be. If I cook a meal using ginger, and someone who eats it dies of a rare ginger allergy which they themselves were not aware of, was my action immoral? If so, why should we even try to be moral at all, since we cannot fundamentally determine whether we are moral or not?

You make the mistake of taking the exception for the rule. The vast majority of moral principles rely on broad trends rather than individual instances. You can't really claim your action was immoral unless you knew of the ginger allergy and intentionally cooked with it. Your action was missing the critical elements of knowledge and intent.

The Catholic teaching is that sin has three elements: Gravity, knowledge, and consent. Gravity is used to determine whether a sin is mortal or venial. In order to be a mortal sin the matter must be grave, the act must be knowingly committed and completely consented to. There was a more secular rendition of this but it escapes me at this moment. There is a further caveat that ignorance of the moral law written in the heart can diminish, but not necessarily remove the gravity of the sin.

Essentially in order for an act to be immoral it must have real consequences and be committed knowingly and with malice for the object of the immoral act.

The selfless altruism is different, I think. Presumably missionaries have a moral system which tells them that helping others is the ultimate good, which makes selfless altruism entirely rational. Rational does not need to mean cold-hearted, self-interested or even normal.

Helping others needn't be the ultimate good, but it would in general be considered a good. You can help others for entirely selfish reasons, like a tax deduction. There is a certain duality between avoiding evil and doing good. Organized religion tends to place more weight on the value of doing good, although its warnings against doing evil are usually more strongly worded than the laws of a secular system, leading some to believe religious people are fatalists.

When you state that the question of God's existence is irrelevant to the moral issue, we are at an equal place. However, you then imply that following the commandments of God is the ingredient for living a life of excellence. There is a huge rational gap here: why these commandments? why not others? why use commandments at all?

I mostly render the question of God's existence irrelevant in cases like this because it too often trails off into a subject divorced from actual moral discussion. If you don't believe in God, it is not practical for me to spend any time establishing him for you when I could use more earthly terminology. The commandments are essentially propositions about human relationships. The first three are between humanity and the divine, the latter seven are about man's relationship to other human beings.

The latter set is what most people on earth have as a baseline for moral teaching. My belief is that the first three provide a powerful psychological mechanism for the latter seven, and reinforce them by requiring you to bring attention to a greater morality than yourself. Human beings are not by nature creatures that deal in the words "always" or "never." In effect they give you an additional strength and drive that you would not otherwise have, and when manifested positively this allows for a greater moral good.

This can be characterized by stubbornness, fanaticism, persistence, or any of multiple positively or negatively connoted adjectives, but it is difficult to argue that more forceful people do not bring greater impact. There is no book of great moderates in history, after all (or if there is, it is not a long one, and the "moderates" engaged in extreme measures for their supposedly moderate ideal). So back to your question of why commandments? Because immutable commandments taken as a whole grant inner strength and purpose, where collections of "rules" and "laws" that vary over time do not.

I feel like I'm repeating myself a little, so I'll keep this brief: If the commandments of a God are good by definition, then rape, genocide, torture, etc. are all reasonable acts if God commands them. I find this unpalatable and therefore reject good as defined by what a God commands, but if you accept this I suppose it is consistent and rational. Most people are not prepared to obey this kind of God though.

I feel your earlier arguments were addressed in the course of the previous paragraph. My argument against this one is simply that a powerful structured morality must confirm and enhance a person's inner rationality. Kant's categorical imperative is applicable here. If God commands rape, genocide, and torture, then why would he condemn it if the "sacrifices" he commanded beat you back and then start doing the same? Clearly then, this God has traits that are not desirable if the tables are turned on his heralds. He conflicts with the human rationality that you cannot always win every battle.

Now you might say this is just a heavenly iteration of The Golden Rule, but the Golden Rule has no answer to a wholly anarchic people. This often leads people to fall back to The Silver Rule, "don't do unto others what you wouldn't have done unto you," but the Silver Rule hardly compels someone to do good. The third rule is The Bronze Rule: treat goodness with reciprocation and evil with justice (iirc). It is still passive like the Silver Rule, but it implies that punishment should exist for wrongdoing, rather than the ideal of merely not doing something.

In other words, moral systems are necessarily complex, but they hardly need to answer all possible outcomes. They need to offer a baseline and a purpose, and need to confirm a sense of human rationality. Otherwise you're dealing with a cult. I may expound more later, this is probably not as well formed as I would like.
 
It's a perfectly valid criticism of Kant - if your reasoning falls apart at such an obvious example you're pretty screwed. I have problems with the rest of his philosophy, too.

ferron: yeah, that's what I meant - I've dabbled in such things myself - but it doesn't mean that were a rape to occur outside the set boundaries of a rape fantasy scenario, it would be any less a rape, if that makes it any clearer.
 
DK, i'm disappointed that you didn't tear apart the "missionary=ultimate good" statement. hell, i even left that alone so that you could destroy it (because i presume you can do it better than i).
anyways, for the action of helping others to be "the ultimate good" they have to do it of their own will, with no presumed benefits. many missionaries are not doing it for such reasons. if you ask them, they respond with remarks of "it is god's will" and "god told us to spread his word." these statements are little more than "i want to go to heaven."
do some of them suffer? yes, but they are doing it while being part of a religion which effectively glorifies sacrifice and physical pain for the church.
if you want an "ultimate good" then don't look at missionaries, look at the various programs where doctors CHOOSE to go out into war-torn and impoverished lands for no money, simply because they believe that every life is worth saving.

@akuchi-yeah, the wording just threw me off a bit.
 
It's a perfectly valid criticism of Kant - if your reasoning falls apart at such an obvious example you're pretty screwed. I have problems with the rest of his philosophy, too.

What is obvious about controlling the track switch where the switchman has omniscient knowledge that slowing down or stopping will cause a brutal derailing into a wall and going through a tunnel will necessarily kill two railworkers therein?

That must be one very fucked up switchman. Why would he have people working inside a tunnel when he knows a train is coming? Why would they have a tunnel with no emergency route for the workers in case of such an incident? You may have just invented a new logical fallacy: "The Idiot Switchman's Dilemma." Or "Switchwoman's" if you prefer, natch.

@ferron:

I didn't see anything particularly malevolent in Ascalon's mention of missionaries. Nowhere does he assert that being a missionary is an ultimate good, his assertion is that missionaries believe "helping others" as a general concept is their ultimate good. Which isn't entirely unfounded nor entirely true. In either case I didn't think it an egregious mockery of missionaries.
 
i never said there was malevolence in it, just that there is no proof of it being the "ultimate good" -which he doesn't say is their version of it, but rather that the moral system says that it is- in that case.
 
Different rational beings can like and dislike different things. For example, I typically do not care if I am lied to or not, because it would imply that I "trust" people. On the contrary, I believe that facts should be acquired only in ways that guarantee accurate knowledge and thus that others should be interacted with in such a way that the truth value of what they say matters as little as possible. Thus, ideally, every fact should be distrusted unless it is confirmed by at least two independent rational agents, confirmed by an agent subject to a lie detector, or very simply confirmed by a tool which can be rationally understood to be trustworthy. In a system which minimizes trust (as should any truly robust system), lying is hardly immoral and liars are nothing more than slightly defective.

Furthermore, it is perfectly possible to imagine a rational agent who can only be happy when he or she sees other rational agents suffer. A society of such agents would be impossible to sustain and no meaningful morality would be derived for them. What you mean by "rational agent" is hardly a well-defined concept. At the core, a rational agent is an agent that can act in such a way that it maximizes its own happiness, regardless of what makes him or her happy. An irrational agent would be one that acts incompetently with respect to his or her own happiness (and thus hurts himself). A society of rational agents would thus only function properly if the happiness of each was reconcilable to some extent - but even if that was the case, there is no reason to assume that they are all the same. Kant's mistake here is to assume that there exists some canonical rational agent, which is a thoroughly unsubstantiated assertion - an unwarranted generalization of his own rationality.

Secondly, basing your moral system on what rational agents would want can only make sense if society was composed of rational agents. Human society is not composed of rational agents. Far from it. Therefore, it makes no sense for morality to have anything to do with what rational agents would or wouldn't like.

On the other hand, we have solid evidence that some humans are not rational in the same way as other humans. Many humans are not rational, neither in actuality nor even in possibility. I am fairly sure that the most rational chimpanzee on the planet is more rational than the least rational human on the planet, notwithstanding mental illness. And then I've got to wonder what your criterion is to determine whether an entity is rational or not and what rationality has to do with this in the first place.

The idea is that (the rational part) a rational person desires the truth, and to lie is to deny this. We know that we are rational beings, irrespective of whatever degree this is evident. Kant's proposal is that being rational (to whatever degree we show this), we should not commit actions to others which a rational being (a theoretical one) would not want done to itself. Not infallible by any means, but it has a certain degree of validity.

You are outnumbered, though.

What's your political theory? Just curious.

I'll post that later. I intended to answer this last, and replying to everything else has left me quite drained. In a good way of course, since I rarely get the opportunity to have this kind of discussion.

No action "does not affect" principles. Every action would probably affect it to varying extents. A little, a lot, anything in-between. It would be simplistic to think otherwise. Hence, a morally neutral outcome can only be defined meaningfully not as one which would not affect the principles, but in fact as one which would not affect the principles more than X, where X is a predetermined threshold. It is that threshold which is problematic. Just like you can't pinpoint a moment where a small mound of sand becomes medium-sized or becomes large, you can't rationally determine a threshold between good and evil or between either and neutrality.

There's some irony here in the fact that I'm the one who is calling you out, here, for having theories that are way too simple, i.e. for unwittingly applying Occam's razor in a situation where it can't be applied. Because even though you don't realize it, that's exactly what you do. Relative morality is richer and much more complex than absolute morality. And as it stands, it is a much more accurate portrayal of what morality is and a much more powerful model of what it could be. To reject relative morality in favor of absolute morality is thus what I would call an undue application of Occam's razor: the former is more complex, but unlike the latter, it can represent, quantify and analyze the inherent uncertainty which exists in morality. Absolute morality is pretty lazy in comparison.

There are two relevant situations here. Firstly, when the consequences of the action are insufficiently known to give a valid answer, in which case neutral works fine. The other case is more an order-of-magnitude thing. Is it reasonable to steal a loaf of bread to give to a child? How about if it will feed 10 children? Or 100? Or all children? Or all children forever (that's a pretty impressive loaf of bread, but we are speaking hypothetically here)? Say after summing up the negative and positive consequences, you find that stealing to feed one is definitely immoral, and stealing to feed all forever is definitely moral. You are however unsure about feeding 10. You cannot pinpoint the transition between neutral, moral and immoral, but you can say, for a given number, whether it is positive, negative, or if you are unsure, in which case the action would be neutral.

What interests me, though, is how relative morality could solve the same problem any better. What does it matter what society thinks about your actions?

The problem that you present is largely unspecified. The life of your comrade is arguably worth much less than humanity at large - a billion times less, if you wanted to quantify that. Even if you don't know the relevant probabilities, it is reasonable to assume that your comrade's life does not warrant the risk and thus that you should kill him anyway. You also kind of evaded my actual point, which is that you don't know how to act around the threshold. Uncertainty comes in degrees. At which degree of uncertainty, in your example, would the moral decision change? Any theory of morality must incorporate a certain dose of risk aversion - but how much? More importantly even, how do you compare the evil of killing your comrade to the evil of the Evil Empire getting their way?

Killing the comrade robs him of his life, while allowing him to betray the Rebellion results in the will of humanity being crushed forever. The suppression of the will is why I could accept killing him. If killing him would save lives, that is also legitimate, but I would have to have a high degree of certainty that a large number of lives would be lost.

Here's another hairy moral problem: would you be willing to kill every single criminal on the planet, right now, in one fell swoop? Notwithstanding the possibility that non-criminals eventually take their places, you would arguably make the world better at the cost of eliminating a large chunk of it. Or is eliminating these people an evil in itself? What would you do with a mass murderer who also happens to be the only human being capable of finding a cure for cancer (and willing to)? Would it be acceptable to torture the smartest human who ever existed in order to force him or her to find a cure for all diseases, should he or she refuse to cooperate willingly and should there be sufficient evidence that nobody else could do it?

I don't think there is any conclusive answer to these dilemmas and I don't think there could be either. There exist very rational arguments that go either way.

There might not be a common answer, but conclusive answers can be reached from a rational system.

In mine, for instance:
Yes, provided that the severity of their crimes matched certain qualifications, but I don't think you are talking about pick-pockets either. They have freely elected to do what they know is evil, are the direct cause of evil, and their elimination would remove much evil. However, I would not do so more than once, as this would realistically take away the ability to choose to be immoral, and hence the ability to be moral.
Offer a compromise: there are not enough criminals able to achieve such feats to lessen the deterrent.
Yes. Taking something by force, even a life, is legitimate if it is necessary for the salvation of many. It should, however, be done in the fairest possible way. I do realise that this changes my answer to the explorer's problem: essentially this is a decision that cannot be left in the hands of an organ like the state or the judiciary, but is a situation where "in extremis" interference may take place.

In Kant's:
No, one must not kill another rational being.
Kant's laws are unconcerned with the state's verdict.
No, one must not torture another rational being.

What cause? Racial segregation? ;)

Even that, actually: being wrong does not nullify having moral intent.

Anyway, this is not a very good example. The rapist has done evil deeds and must be punished for them in order to maintain a healthy society. So would anyone for doing what he did. When talking about equality, we're talking about "inherent worth" or "inherent rights", i.e. irrespective of one's actual actions. For example, we could say that X and Y are not equal because X is smarter than Y. Pushing this further, we could have legal non-equality, i.e. that X and Y are not equal as in they have different rights and X can do A, but Y cannot (perhaps because X is smarter than Y, is of a "superior" race, is born in a certain caste or any other reason or lack thereof).

The example is appropriate, but I am not talking about decisions the state should take. One individual has chosen to do evil, acknowledged even by himself as evil, out of self-interest, while the other has intended to do good, and given of himself to do so. The first individual is worth less, even if no-one knows about the actions of either. I don't think people have any inherent and permanent rights. Their only inherent right is to have a will, but if they choose to abandon it, then that is their choice. Frankly, and this is probably going to be a little unpopular, I don't think a newborn child has any worth besides the value (s)he has to his or her parents. Our worth is dependent upon our usefulness and our choices (and even more controversially, the depth of the tragedies of our lives), and our rights are a function of agreements (i.e. the social contract).

I agree, the goal of justice should be to maintain a functional society. Some elements of society are defective and we have to weed them out and deter others from imitating them, irrespective of any other factors.

No, it doesn't. First, you have to realize that this is not an argument for absolute morality but indeed an argument that people should believe morality is absolute, even though this is not the case. Second, that morality is relative does not mean most people do not largely agree on it and it does not mean that anybody has to respect the morality of other people.

Conceded.

Morality is a system that does gravitate around a target. It does tend to converge towards something. Your mistake is to believe that the target is not moving and that it is well-defined for all actions. In fact, morality tries to optimize a moving, fuzzy target. Parts of that target hardly move and are fairly clear, which results in universal and timeless agreement on some principles like "do not kill". Some parts move because society changes - now all races are equal. Some parts are fuzzy because they involve good parts and bad parts, but it's not clear how to weigh them - but thankfully, they are usually contrived.

I disagree: a moral system should be able to take inputs and produce an output. The system shouldn't change with the inputs, but the outputs will. Even if a society is largely fine with rape (and there are some primitive societies like this), rape (for fun, let us say, although I struggle to find any circumstances that would justify it) is still immoral, as it is a horrific assault on the privacy and will of an individual.

This has nothing to do with relativism. That some moral systems yield apparently superior results at large than some others in no way entails that there is one unique moral system that trumps all others. All I am saying is that rationality cannot derive the moral system because there just isn't just one. There are many of them that will offer contradictory advice in some situations, most of them contrived enough that it won't matter much. Rationality will be of no help to choose from these.

Rationality and objectivity allow you to select the correct moral choice, or to take solace in the fact that it is too uncertain for a conclusion, and that any course (within a shallow pool, obviously) would be a legitimate choice. Relativism tells me that if my invention will save thousands of lives, and my society believes that my invention is immoral, saving those lives is immoral. I just say that my society is wrong.

So you're saying that some "unhappiness" is required for you to be happy?

Look, seriously, if this is the way you think, what do you think the hive mind is going to give you? It will give you tragedy, it will give you morality and growth, it will give you pain and beauty. To the savage, it will give syphilis and cancer. It will shove both of you in a city populated with puppets that won't indulge you. And then in the end it will give you Truth and it will be up to you to deal with it. The point is that the hive mind won't just make you happy, it will make you happy exactly in the way that you would want to be happy. If you don't want to be happy, then you won't be, and it will be by your own fault. If you don't want the hive mind to tamper with your life in any way, it will just simulate real humans around you. In a way, all it would be doing would be to free you from your environment if you wish, hence giving you a strictly greater amount of freedom.

As for me, I'll be living in a nice house, doing what I like, writing cool programs and bestsellers, with as much money as I need and a gorgeous, incredibly smart woman by my side. At every moment, I'd have a small effort to make, but I would have support and it will always be doable. And then at the end of my life I'll get to know about the whole hive thing and I'd be grateful for everything that was given to me.

You have a good point here. However, I question whether the hive mind would not rob human existence of its Beauty: there would be nothing to discover but what the hive already knows - is that everything? I cannot conceive of a way for it to know and understand all of our humanity ... but if it could, I'd have to concede the superiority of the hive.

RE: Ascalon.

In general: My explanation of the rational terrorist is that he is using clear cause-effect logic in calculating his next move. I ascribed no morality to it, just that his action follows rational means to a predictable conclusion based on his own premise of using fear for control. My argument is that rational and moral are two different things, just as I will explain that legal and moral are also two different things below.

If the terrorist truly believes that his actions are rationally based on achieving his ultimate good, then he is moral, while his actions are not, as he is wrong. My argument is that while a moral action may come about irrationally, morality must be rational.

You make the mistake of taking the exception for the rule. The vast majority of moral principles rely on broad trends rather than individual instances. You can't really claim your action was immoral unless you knew of the ginger allergy and intentionally cooked with it. Your action was missing the critical elements of knowledge and intent.

The Catholic teaching is that sin has three elements: Gravity, knowledge, and consent. Gravity is used to determine whether a sin is mortal or venial. In order to be a mortal sin the matter must be grave, the act must be knowingly committed and completely consented to. There was a more secular rendition of this but it escapes me at this moment. There is a further caveat that ignorance of the moral law written in the heart can diminish, but not necessarily remove the gravity of the sin.

How do we calculate these elements? Some sort of multiplication? And what does gravity mean exactly? Are we talking about the gravity of the consequences or the extent of the malice behind the act? I assume that knowledge means knowledge that the act is a sin, as otherwise it would be adequately covered by consent, but if you do not act out of malice, have you then not committed a sin? I'm a little lost, so if you could please tell me whether the following would be sins, and what their magnitude would be:

A: I trip while holding a kitchen knife and accidentally kill my wife.
B: I am blackmailed into killing my wife, and do not wish to hurt her.
C: I kill my wife because she asked for it and I do not consider this wrong.
D: I despise my wife and try to kill her, but fail to.
E: I am blackmailed into killing my wife, but fail to.

From the paragraph before, I understand that A is not a sin, but I am unsure about what the others may be by the standards you mentioned.

Essentially in order for an act to be immoral it must have real consequences and be committed knowingly and with malice for the object of the immoral act.

What about an act which was intended to cause harm but fails to due to incompetence etc.? Is that not also immoral?

[snip]

I mostly render the question of God's existence irrelevant in cases like this because it too often trails off into a subject divorced from actual moral discussion. If you don't believe in God, it is not practical for me to spend any time establishing him for you when I could use more earthly terminology. The commandments are essentially propositions about human relationships. The first three are between humanity and the divine, the latter seven are about man's relationship to other human beings.

I understand your point, but if there is a moral discussion, you either make your points based on pure rationality, or you base them on the assumption that your moral baseline is valid because it is God's. If it is the latter, then an opponent has every right to say that your acceptance of rationality allows rationality to be used to discuss morality, but unless you can prove the validity of God's commandments in a moral discussion, they cannot be assumed.

The latter set is what most people on earth have as a baseline for moral teaching. My belief is that the first three provide a powerful psychological mechanism for the latter seven, and reinforce them by requiring you to bring attention to a greater morality than yourself. Human beings are not by nature creatures that deal in the words "always" or "never." In effect they give you an additional strength and drive that you would not otherwise have, and when manifested positively this allows for a greater moral good.

I cannot concede that the lack of immutable commandments prevents you from having very strong morality and drive. In fact, dare I bring an even less popular, and horribly misinterpreted, philosopher into the discussion, and suggest that a significant basis of Nietzsche's philosophy was an answer to the fallacy that having no divine or supreme commandment spells hopeless nihilism. A strong personally created morality can be just as powerful as, and very possibly more powerful than, a morality based on some transcendent external concept (e.g. God, patriotism, communism, etc.). I'm not saying this cannot be objective morality, because it can, and usually is, but it was my morality that caused me to alter my religious beliefs, not the other way round, and it is this morality which motivates me to create my fair share of unrest.

This can be characterized by stubbornness, fanaticism, persistence, or any of multiple positively or negatively connoted adjectives, but it is difficult to argue that more forceful people do not bring greater impact. There is no book of great moderates in history, after all (or if there is, it is not a long one, and the "moderates" engaged in extreme measures for their supposedly moderate ideal). So back to your question of why commandments? Because immutable commandments taken as a whole grant inner strength and purpose, where collections of "rules" and "laws" that vary over time do not.

I largely agree with you here, but the "rules" and "laws" are invariant: the moral outcome may change according to the situation, but the standards used to judge it do not: this is the foundation of objectivism.

I feel your earlier arguments were addressed in the course of the previous paragraph. My argument against this one is simply that a powerful structured morality must confirm and enhance a person's inner rationality. Kant's categorical imperative is applicable here. If God commands rape, genocide, and torture, then why would he condemn it if the "sacrifices" he commanded beat you back and then start doing the same? Clearly then, this God has traits that are not desirable if the tables are turned on his heralds. He conflicts with the human rationality that you cannot always win every battle.

I don't quite understand what you are saying here. For it to be moral to obey a command, the command must either be moral (by whatever rational standards you measure this with), or the command must be moral in and of itself, i.e. the deed commanded is moral because it was commanded by the Authority. Where do you stand here? If God commanded the above actions, would you consider that these actions have now become moral, or would you say that God has now become immoral? When you say this kind of God conflicts with human rationality, are you saying that this kind of God is not moral, or that we should concede an irrational God? If it is the latter, how can you justify rationally pursuing the irrational ends of said God? I do not consider "humans are flawed, God isn't, and therefore your point is irrelevant" a valid argument, but I don't think that's what you're saying.

Now you might say this is just a heavenly iteration of The Golden Rule, but the Golden Rule has no answer to a wholly anarchic people. This often leads people to fall back to The Silver Rule, "don't do unto others what you wouldn't have done unto you," but the Silver Rule hardly compels someone to do good. The third rule is The Bronze Rule: treat goodness with reciprocation and evil with justice (iirc). It is still passive like the Silver Rule, but implies that punishment should exist for wrongdoing rather than the mere ideal of not doing something.

In other words, moral systems are necessarily complex, but they hardly need to answer all possible outcomes. They need to offer a baseline and a purpose, and need to confirm a sense of human rationality. Otherwise you're dealing with a cult. I may expound more later, this is probably not as well formed as I would like.

The trouble with having a baseline and a sense of human rationality is that you are accepting the validity of rationality as a tool to establish morality, and then do not subject the unproven baseline to rational scrutiny. The metallic rules you mention do not require the existence of a God to function, in which case why would you pay any heed to that God's commands? In fact, you stated earlier that many parts of the moral code were against rationality (although this may have been with a less thorough definition of rationality).

The baseline and purpose are also useless unless they can translate situations into moral conclusions. From what I have observed, you justify your positions rationally, implying that you believe that rationality is to be used for this translation.

DK, i'm disappointed that you didn't tear apart the "missionary=ultimate good" statement. hell, i even left that alone so that you could destroy it (because i presume you can do it better than i).
anyways, for the action of helping others to be "the ultimate good" they have to do it out of their own will, with no presumed benefits. many missionaries are not doing it for such reasons. if you ask them, they respond with remarks of "it is god's will" and "god told us to spread his word." these statements are little more than "i want to go to heaven."
do some of them suffer? yes, but they are doing it while being part of a religion which effectively glorifies sacrifice and physical pain for the church.
if you want an "ultimate good" then don't look at missionaries, look at the various programs where doctors CHOOSE to go out into war-torn and impoverished lands for no money, simply because they believe that every life is worth saving.

My point still stands: they are acting rationally. I completed this response further on.

@akuchi-yeah, the wording just threw me off a bit.

What is obvious about controlling the track switch where the switchman has omniscient knowledge that slowing down or stopping will cause a brutal derailing into a wall and going through a tunnel will necessarily kill two railworkers therein?

That must be one very fucked up switchman. Why would he have people working inside a tunnel when he knows a train is coming? Why would they have a tunnel with no emergency route for the workers in case of such an incident? You may have just invented a new logical fallacy: "The Idiot Switchman's Dilemma." Or "Switchwoman's" if you prefer, natch.

Hahahaha. I couldn't help myself. I know it's probably not what you meant, but it's still hilarious.

@ferron:

I didn't see anything particularly malevolent in Ascalon's mention of missionaries. Nowhere does he assert that being a missionary is an ultimate good, his assertion is that missionaries believe "helping others" as a general concept is their ultimate good. Which isn't entirely unfounded nor entirely true. In either case I didn't think it an egregious mockery of missionaries.

Either way, whether the missionary is motivated out of the belief that helping others is an ultimate good, or whether it is God's command, (s)he is still acting rationally, which was my point.

Edit: A last minute somewhat irrelevant addition about Kant's stance on rape: He states two things, neither of them being that a woman should resist rape to the death.
Firstly, he states that it is better to die than to participate in an immoral act. This is certainly a defensible position. At no point does he state that being raped is a disgraceful act: "surrendering" implies an act. This may however have been tainted by the unfortunate social beliefs of the time and Kant's (admirable yet often perverse) insistence on embracing every ramification of his philosophies.
Secondly, he states that resisting to the death is better than suicide out of shame afterwards. This makes logical, let alone moral, sense.

This has been fascinating so far, so thanks to everyone who posted here.

Yours appreciatively,

Ascalon
 
The idea is that (the rational part) a rational person desires the truth, and to lie is to deny this. We know that we are rational beings, irrespective of the degree to which this is evident. Kant's proposal is that, being rational (to whatever degree we show this), we should not commit actions towards others which a rational being (a theoretical one) would not want done to itself. Not infallible by any means, but it has a certain degree of validity.

I disagree. A rational person is interested in satisfying certain personal objectives (such as being happy). As such, he is only interested in truth to the extent that it is useful in order to meet these objectives and he certainly does not care about how that information is obtained. A rational person would not necessarily hold it against another rational person to lie to them: if an assessment of the situation of X makes it clear that there are rational reasons for him to lie to me, it would be rational, not immoral, for X to lie to me. Hence, I would simply not trust what X tells me. Many pieces of information are irrelevant and hence one can lie all they want about them. If I need relevant information, there are usually trustable ways to obtain it, including giving X good reasons to tell me the truth. In a world of perfectly rational people, I would say that it makes little sense for lying to be immoral.

There are two relevant situations here. Firstly, when the consequences of the action are insufficiently known to give a valid answer, in which case neutral works fine.

It is not clear how to meaningfully determine the minimal subset of known consequences one needs to be able to give a valid answer.

The other case is more an order of magnitude thing. Is it reasonable to steal a loaf of bread to give to a child? How about if it will feed 10 children? Or 100? Or all children? Or all children forever (that's a pretty impressive loaf of bread, but we are speaking hypothetically here)? Say after summing up the negative and positive consequences, you find that stealing to feed one is definitely immoral, and stealing to feed all forever is definitely moral. You are however unsure about feeding 10. You cannot pinpoint the transitions between neutral, moral and immoral, but you can say, for a given number, whether it is positive, negative, or whether you are unsure, in which case the action would be neutral.

Assuming you can even determine if a consequence is positive, negative or neutral (not trivial), how do you weigh them? Because that's the problem. If you have consequences X, Y and Z valued at 6, 5 and -12, the sum is -1, which is negative. If they are valued at 7, 6 and -12, the sum is 1, which is positive. If a person dies, I assume that's negative... but how negative? If a child gets fed, it's positive, but how positive? What is the difference or ratio between the absolute values of the two aforementioned values? Very small changes in the valuation of these events can lead to wildly different results in corner situations. And that's just the tip of the iceberg: assuming that you can determine whether an action is positive or negative, if you have five actions valued at -10, -10, -10, -10 and 50, the sum is +10, so overall it's "good". If you have five actions valued at 2, 2, 2, 2 and 1, the sum is +9. That is still good, but according to the simple mathematical model it is a little worse. However, the second policy manages to always be positive, which is arguably better than four negatives and one slightly larger positive. But if the last positive was +1000 then the first policy would probably still be better. Where do you place the line? Can you even place one? I say no.
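The sensitivity described above is easy to make concrete. Here is a toy sketch (all the valuations are the invented numbers from the argument, not a real moral calculus) showing how tiny shifts in valuation flip the verdict, and how a simple sum hides the distribution of harms:

```python
# Toy illustration of a naive utilitarian sum over consequence valuations.
# All numbers are hypothetical; the point is the fragility of the verdict.

def verdict(values):
    """Sum the consequence valuations and label the net result."""
    total = sum(values)
    if total > 0:
        return "moral"
    if total < 0:
        return "immoral"
    return "neutral"

# A one-point change in a single valuation flips the overall judgement.
print(verdict([6, 5, -12]))   # sums to -1 -> "immoral"
print(verdict([7, 6, -12]))   # sums to +1 -> "moral"

# The sum also hides distribution: four harms plus one large benefit
# scores *better* (+10) than a policy of five small benefits (+9),
# even though the latter harms no one.
print(verdict([-10, -10, -10, -10, 50]))  # +10 -> "moral"
print(verdict([2, 2, 2, 2, 1]))           # +9  -> "moral"
```

The sketch shows only that the sign of the sum is hostage to arbitrary weightings; it takes no position on whether better aggregation rules exist.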

What interests me though is how relative morality could solve the same problem any better? What does it matter what society thinks about your actions?

It's more that I view morality as a societal phenomenon that can be studied scientifically. I also do not see any fundamental "problems" that should be solved. I don't think there's some magical "ought" that we can find (whether it be rationally or through religion) and that we should follow. I see humanity as a large collection of black boxes that behave and "feel things" in a somewhat common fashion, like a flock of birds. Morality is the word we use to refer to a set of implicit rules which that collection of entities has evolved in order to regulate itself as a group. I see these rules as obviously dependent on the inner nature of these entities, which can change through time, and I understand that the various differences between individuals and subgroups within society further prompt for local variations in these rules and account for some inherent fuzziness in the valuation of actions as moral or immoral. To me "absolute morality" fails to account for these differences. To be more precise, absolute morality requires every single person to have a similar vision of what the perfect society would be, which simply is not the case. I also think it is impossible to rationally derive any concept of perfect society.

Killing the comrade robs him of his life, while allowing him to betray the Rebellion results in the will of humanity being crushed forever. The suppression of the will is why I could accept killing him. If killing him would save lives, that is also legitimate, but I would have to have a high degree of certainty that a large number of lives would be lost.

How large?

There might not be a common answer, but conclusive answers can be reached from a rational system.

In mine, for instance:
Yes, provided that the severity of their crimes matched certain qualifications, but I don't think you are talking about pickpockets either. They have freely elected to do what they know is evil, are the direct cause of evil, and their elimination would remove much evil. However, I would not do so more than once, as this would realistically take away the ability to choose to be immoral, and hence the ability to be moral.

I fail to see what is wrong in removing the ability to choose to be immoral.

Offer a compromise: there are not enough criminals able to achieve such feats to lessen the deterrent.
Yes. Taking something by force, even a life, is legitimate if it is necessary for the salvation of many. It should, however, be done in the fairest possible way. I do realise that this changes my answer to the explorer's problem: essentially this is a decision that cannot be left in the hands of an organ like the state or the judiciary, but is a situation where "in extremis" interference may take place.

In Kant's:
No, one must not kill another rational being.
Kant's laws are unconcerned with the state's verdict.
No, one must not torture another rational being.

You just gave me two rational systems (and I'm generous in calling them rational - it is extremely easy for a moral system to be inconsistent along moral grays). If there's an absolute morality then there should be just one.

The example is appropriate, but I am not talking about decisions the state should take. One individual has chosen to do evil, acknowledged even by himself as evil, out of self-interest, while the other has intended to do good, and given of himself to do so. The first individual is worth less, even if no-one knows about the actions of either. I don't think people have any inherent and permanent rights. Their only inherent right is to have a will, but if they choose to abandon it, then that is their choice.

So a person who wants to do evil, but is afraid to do so and lives an honest life out of pure self-preservation, would be worth less than a person who wants to do good, but does evil out of sheer incompetence? I'm really not sure I understand what you're saying :(

Frankly, and this is probably going to be a little unpopular, I don't think a newborn child has any worth besides the value (s)he has to his or her parents. Our worth is dependent upon our usefulness and our choices (and even more controversially, the depth of the tragedies of our lives), and our rights are a function of agreements (i.e. the social contract).

I tend to agree that "worth" should mostly depend on usefulness. An embryo certainly has no inherent worth that would make abortion immoral. This said, a newborn is the result of a great amount of work, so if only as property it's actually worth a great deal.

I disagree: a moral system should be able to take inputs and produce an output. The system shouldn't change with the inputs, but the outputs will. Even if a society is largely fine with rape (and there are some primitive societies like this), rape (for fun, let us say, although I struggle to find any circumstances that would justify it) is still immoral, as it is a horrific assault on the privacy and will of an individual.

But really, the answer to this would be that rape used to be moral, now it isn't, and it became immoral retroactively. You have the right to call these societies wrong, immoral, backwards and barbarian. It's your prerogative and I certainly won't disagree with you on that. But you have to keep in mind that in a few thousand years, future you might use the same words to qualify current society.

Rationality and objectivity allow you to select the correct moral choice, or take solace in the fact that it is too uncertain for a conclusion, and that any course (within a shallow pool, obviously) would be a legitimate choice. Relativism tells me that if my invention will save thousands of lives, and my society believes that my invention is immoral, saving those lives is immoral. I just say that my society is wrong.

You are free to think that. You are free to reject the morality of society. And perhaps society will agree with you in the future. "Moral" and "immoral" are just words. There are no magical "moral" and "immoral" labels you can stick on actions. All actions have consequences which do not depend on your moral system. You can valuate them in any way you'd like. The use of one, or another of these valuations by society at large would produce various types of societies and I can hardly see an objective criterion that would allow us to determine which of them would "work better". What matters? Happiness? Freedom? Technological progress? What people want? What people think they want? Some arbitrary combination of all of these things? Who matters? All humans? All living beings? All rational beings? A well-defined subset of humans? There are many things that one might like to prioritize and each of them will lead to a different concept of morality. At best, you can try to convince others that your vision is better. But in fact, any criterion you'd find would be somewhat self-referential, since you are part of society and you have your own biases as to what you think it should be.

Whatever rules regulate how humans "should" behave in a "healthy" society we call "morality". Most of it is obvious for the great majority of humans, but in the end the truth is that there's no real, formal definition of what a healthy society is in the first place and there's a lot of leeway, so any kind of morality is, by necessity, ad hoc, a loosely constructed set of rules that work no worse in practice than any "rationally constructed" morality because there's no stable criterion to derive them from.

You have a good point here. However, I question whether the hive mind would not rob human existence of its Beauty: there would be nothing to discover but what the hive already knows - is that everything? I cannot conceive of a way for it to know and understand all of our humanity ... but if it could, I'd have to concede the superiority of the hive.

It is pretty clear to me that eventually computers will surpass our cognitive abilities. When that happens, as far as discovering things go, we'll be pretty useless anyway. Might as well give up and place ourselves in a nice little natural reserve here on Earth while the hive mind expands through the galaxy.
 
I think that the discussion has boiled down to a number of key issues here, so I will try to structure my post to deal with each one in turn.

Firstly, what morality is:
I disagree. A rational person is interested in satisfying certain personal objectives (such as being happy). As such, he is only interested in truth to the extent that it is useful in order to meet these objectives and he certainly does not care about how that information is obtained. A rational person would not necessarily hold it against another rational person to lie to them: if an assessment of the situation of X makes it clear that there are rational reasons for him to lie to me, it would be rational, not immoral, for X to lie to me. Hence, I would simply not trust what X tells me. Many pieces of information are irrelevant and hence one can lie all they want about them. If I need relevant information, there are usually trustable ways to obtain it, including giving X good reasons to tell me the truth. In a world of perfectly rational people, I would say that it makes little sense for lying to be immoral.

It's more that I view morality as a societal phenomenon that can be studied scientifically. I also do not see any fundamental "problems" that should be solved. I don't think there's some magical "ought" that we can find (whether it be rationally or through religion) and that we should follow. I see humanity as a large collection of black boxes that behave and "feel things" in a somewhat common fashion, like a flock of birds. Morality is the word we use to refer to a set of implicit rules which that collection of entities has evolved in order to regulate itself as a group. I see these rules as obviously dependent on the inner nature of these entities, which can change through time, and I understand that the various differences between individuals and subgroups within society further prompt for local variations in these rules and account for some inherent fuzziness in the valuation of actions as moral or immoral. To me "absolute morality" fails to account for these differences. To be more precise, absolute morality requires every single person to have a similar vision of what the perfect society would be, which simply is not the case. I also think it is impossible to rationally derive any concept of perfect society.

You are free to think that. You are free to reject the morality of society. And perhaps society will agree with you in the future. "Moral" and "immoral" are just words. There are no magical "moral" and "immoral" labels you can stick on actions. All actions have consequences which do not depend on your moral system. You can valuate them in any way you'd like. The use of one, or another of these valuations by society at large would produce various types of societies and I can hardly see an objective criterion that would allow us to determine which of them would "work better". What matters? Happiness? Freedom? Technological progress? What people want? What people think they want? Some arbitrary combination of all of these things? Who matters? All humans? All living beings? All rational beings? A well-defined subset of humans? There are many things that one might like to prioritize and each of them will lead to a different concept of morality. At best, you can try to convince others that your vision is better. But in fact, any criterion you'd find would be somewhat self-referential, since you are part of society and you have your own biases as to what you think it should be.

Whatever rules regulate how humans "should" behave in a "healthy" society we call "morality". Most of it is obvious for the great majority of humans, but in the end the truth is that there's no real, formal definition of what a healthy society is in the first place and there's a lot of leeway, so any kind of morality is, by necessity, ad hoc, a loosely constructed set of rules that work no worse in practice than any "rationally constructed" morality because there's no stable criterion to derive them from.

I don't think we can get away with calling morality a societal phenomenon. Once we have established that we are able to exert an influence on our world, the question "how should we exert that influence?", or rather "how should I exert my influence?", arises. I consider morality to be any attempt to answer that question.

Of course, in order to answer that question, one must either decide what the goal of exerting said influence is, or otherwise attempt to define a behavioural code as being more important than the goal. As you rightly point out, rational deduction of this goal is rather difficult. However, this does clash horribly with the basis of relative morality: the idea that a practice acceptable to a group is moral for that group, and that it gets its moral legitimacy for that group because of its acceptance by that group, is nonsensical in the context of morality as described above. The difficulty of establishing the legitimacy of a particular system of objective morality does not present a valid excuse for the employment of relative morality. Even if the difficulty inherent in determining these objectives means that one cannot necessarily categorically prove that one moral system is superior to another, you cannot evade the fact that there is a single solution, or no solution (which is also a solution, as it tells us we may do as we please), and that it is possible to promote one system over another using reason and an examination of the ramifications of each system.

I am aware that this is not what you are suggesting: indeed, you have defined morality as "a set of implicit rules which that collection of entities has evolved in order to regulate itself as a group". As a scientific phenomenon in the study of the function of groups, this makes sense, but as the answer to the question of morality, it does not reach the heart of the matter. The question of morality is individual: "what should I do?" as opposed to "under what terms would society produce the best results?". I believe that I have succeeded in establishing that if a solution is reached (or approached, which may be more realistic), it must be reached rationally. In the construction of a state or society, the question of the object of said state or society is unavoidable, and once an answer, imperfect as it may be, is rationally decided upon, all laws and less official rules must refer to it rationally.

The idea that the rules and laws by which a society functions are no worse than a system based on rationality is therefore not valid: we can justify (although admittedly not prove) the validity of a moral system (I will justify mine in the next issue as an example).

There is another proposition of a moral system which no-one has explicitly referenced, but we are scraping close enough to it that I feel a brief examination is in order. I can't remember the name exactly, but it is the belief that morality is relative not to a group, but to morality itself. In essence, you have a number of suppositions (e.g. killing is wrong, happiness is good, being polite is good, etc.) and another supposition is correct or incorrect depending on how many of your starting suppositions it fulfils or conflicts with.

I can see what inspired this idea, and how it pertains to the discussion. Due to the rapid expansion of transport and communication networks, there are now numerous groups with different moralities functioning in the same societies. Indeed, in some ways the world is so interconnected that the moralities of a group in one place have a major influence on the existence of those in another entirely. The idea above is perhaps an attempt to find some sort of global morality based on the proposition that you cannot show one morality to be superior to another. This falls flat on three grounds. Firstly, it assumes that all possible viable moral suppositions have already been included, which is clearly nonsense. Secondly, it holds that all moral systems are equal, which I have dealt with earlier. Finally, it does not make even the slightest attempt to justify the end product as a moral system.

The other major issue I see is how to determine the moral system:
But really, the answer to this would be that rape used to be moral, now it isn't, and it became immoral retroactively. You have the right to call these societies wrong, immoral, backwards and barbarian. It's your prerogative and I certainly won't disagree with you on that. But you have to keep in mind that in a few thousand years, future you might use the same words to qualify current society.

I fail to see what is wrong in removing the ability to choose to be immoral.

You just gave me two rational systems (and I'm generous in calling them rational - it is extremely easy for a moral system to be inconsistent along moral grays). If there's an absolute morality then there should be just one.

So a person who wants to do evil, but is afraid to do so and lives an honest life out of pure self-preservation, would be worth less than a person who wants to do good, but does evil out of sheer incompetence? I'm really not sure I understand what you're saying :(

I tend to agree that "worth" should mostly depend on usefulness. An embryo certainly has no inherent worth that would make abortion immoral. This said, a newborn is the result of a great amount of work, so if only as property it's actually worth a great deal.

It is pretty clear to me that eventually computers will surpass our cognitive abilities. When that happens, as far as discovering things go, we'll be pretty useless anyway. Might as well give up and place ourselves in a nice little natural reserve here on Earth while the hive mind expands through the galaxy.

As I alluded to earlier, I think the best response here is to justify a rational moral system. I do not agree that our existence as rational beings justifies answering the moral question by saying that we should behave towards others according to how a rational being would want to be treated, nor do I believe that increasing the degree of happiness in the world is a moral imperative. Rather, the pursuit of life itself appears to me to be the best answer - the only positive direction an answer to the question "how should I exert what power I have over the world?" can take is the growth of the Will and the increase in Power. In short, the "Will to Power" is the essence of life itself. Both the Will and Power are increased by the exercise of Power in accordance with the Will.

However, in order for my Power to grow, all things I can influence must increase in Power themselves - the suppression of Power in order to gain Power is a self-defeating exercise. I can now establish what my priorities must be:

1. Maintain and strengthen a Will to Power over my surroundings.
2. Prevent the Will of others from being suppressed and facilitate its development.
3. Increase my personal Power over my surroundings.
4. Prevent the removal of and facilitate the increase in the Power of others.

As such, there are no "rules" which must be followed. The closest I come to one is my condemnation of rape, which is a direct violation of the most personal, innermost Power. Furthermore, the severe psychological effects which often result from rape may cause a diminishing of the Will itself. I have difficulty imagining any situation which would make rape a moral option (except possibly the prevention of multiple rapes).

However, from this system it is possible to establish answers to almost all moral questions. When an outcome is not immediately obvious (i.e. the consequences of the action are positives and negatives falling under different priority numbers), first principles can be used: maintaining order by crushing a terrorist cell would annihilate their Wills, but it would lessen the fear of the populace [or prevent fear from compromising them], resulting in a net increase in the Will to Power across the entire population.

I cannot assign absolute values to certain acts, but nor can we assign absolute values to anything: we measure mass relative to a lump of metal in a safe which we call one kilogram. The increases and decreases in the Will to Power resulting from an action can be compared against each other, and the outcome (or the most probable outcome, or the expected value [mathematically speaking], depending on whichever is most sensible in each situation) assessed. It is this predicted outcome which is used to determine the morality of an action.
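The "expected value" comparison mentioned above can be sketched in a few lines. This is only an illustration of the mathematical mechanism: the probabilities and the Will-to-Power valuations are invented numbers, since the system itself only claims relative, not absolute, values:

```python
# Hypothetical sketch: compare two actions by the expected change in
# "Will to Power", weighting each possible outcome by its probability.
# All probabilities and valuations are invented for illustration.

def expected_outcome(outcomes):
    """outcomes: list of (probability, valuation) pairs for one action."""
    return sum(p * v for p, v in outcomes)

# Action A: a certain, modest increase.
action_a = [(1.0, 2.0)]
# Action B: a gamble - a large increase half the time, a large loss otherwise.
action_b = [(0.5, 10.0), (0.5, -8.0)]

print(expected_outcome(action_a))  # 2.0
print(expected_outcome(action_b))  # 1.0 -> A is preferred on expectation
```

As the surrounding text notes, whether to use the expected value, the most probable outcome, or the raw comparison of outcomes is itself a judgement call that the sketch does not settle.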

This may seem a little cold, but it makes sense, to me at least. In fact, many results usually considered positive, such as curing diseases, relieving poverty, and scientific progress, are considered positive via this system as well, since they increase Power.

(Please don't call the police :p)

In case you are still interested, I did promise to describe my ideal political system. Naturally, it is closely based on my personal philosophy.

The goal of the system is:
The further increase in the Power of the state as a whole, derived from the power of each individual, through the pursuit of knowledge, the development of technology, the indulgence of the arts, and most importantly the ability of each individual to develop him or herself however (s)he sees fit.

Long winded and vague perhaps, but it should become clearer as the description progresses.

For the system to work, there are certain conditions which must be met. Until that time, the ideals of this system can only be approached. Thankfully, these conditions have finally become achievable. They will probably require extensive implementation and optimisation research, but we now have the technology to make them more than a pipe dream. They also require certain specifics from the state, which is where the policies come in.

The most important element of my economic policy is the creation of an abundance of low-price, high-quality necessities. This would be met by having a number of industries from most major agricultural and production sectors under the direct control of the state. Employment in these sectors would be minimal: advanced technology, robotics and self-maintaining facilities would be the key. The only staff needed in, say, the green bean industry would be a very small number of maintenance crew and a similar number of scientists and economists working towards even more efficiency, analysing consumption, and planning to ensure that there is always a slight excess of green beans on the market. Work conditions would include four- to six-hour days and pay starting at a fairly high baseline, with bonuses given according to performance (so as to avoid the pointless clock-watching which goes on in most bureaucratic systems).

The state industries should provide all the products needed to have a good quality of life, but should not monopolise any particular industry. The result of this policy should be that private companies take similar measures. I see no other way of lowering the hours people must spend at work than by creating competition from the state. The policy of efficiency optimisation, created by state competition, will result in a great diversification of the market (since there is already an excess of the basic product).

Already, modern societies are heading down this route: developed countries have more and more of their population working in tertiary industries. The advancements detailed above would not only minimise the number of people who must work in primary and secondary industries, but would dramatically increase the amount of free time people have. This is essential if tertiary industries are to flourish.

The education system will also be radically overhauled. Due to the vast increase in production, the state can afford to sponsor its citizens for more extensive educations. Not only will universities be free (and really free, not nominally free as some are now), but the best and brightest of all fields will be attracted to universities and schools by dramatically higher salaries and job benefits. I find it outrageous that perhaps the most valuable of all professions is held in such low regard by society at present. Furthermore, the way in which schools operate will be different: subjects will be divided up into separate sections and children will have more choice as to which they wish to pursue (although some basic English and Mathematics courses will remain compulsory for a while). Investigation and exploration, rather than the memorisation of information, will be prioritised. It is my belief that the worst punishment a pupil can receive should be being barred from attending school the next day.

Furthermore, far more resources will be devoted to supporting those who choose to go into research. Naturally, those who actually succeed in making valuable discoveries would be rewarded more, but research should be a supported endeavour.

The power structure will not be democratic (this is where people tend to demonise the whole idea and invoke Godwin's law). This is based on the rather obvious fact that the portion of people who care about society as a whole, know enough about the issue at hand, and are intelligent enough to make a good decision on any one issue is very low - most decisions are best left in the hands of the most qualified. Furthermore, most politicians are only concerned about being re-elected; with such short terms and dependence on the public mood, they are often prevented from taking the best course of action. A number of exceptional individuals from a relevant profession would be selected by the state to form a small council (say five people) for each issue (e.g. agriculture, sanitation, architecture). All members of this council may be present and speak on any issue raised in a greater senate comprised of these councils, but only the lead member (decided on or voted for by the council itself) may vote when voting is called for.

In order to be admitted to the executive senate (the floor of politicians in the senate but not in one of the councils), an individual must pass a horrifically tough exam meant to test their knowledge of the issues and their ability to analyse and assess data logically. Those that pass are cross-examined by the senate to test their ability to engage with it, and only the top 300 applicants are given seats. This process is repeated every four years, with the exception of the leader. The selection of the leader is made once every ten years and there is no term limit. Each member of the floor may choose to stand for selection, but it is likely that many will choose to make alliances in a similar but less rigid manner to today's party system. During the candidacy period, each candidate engages the senate and establishes his or her position on the primary issues. The senate eventually votes for a leader, who may pick 14 people to sit with him in the ruling council and advise him. Up to four of these people may be from outside of the floor of politicians.

I think that covers the major issues. I hope it wasn't too boring.

Regards,

Ascalon.
 
you go wrong very early on, in your first principles

First principles:
I doubt that I think, therefore I think, therefore I am, and I think.

nope, you think, therefore you think. you think, therefore thought exists. you haven't provided any reason to equate thought with the self, or thought with individual existence. of course, descartes made this mistake too and he is a bona fide genius so you shouldn't feel too bad.

I think, therefore there must be a change in my thinking-organ, therefore there must be change.
There is change, therefore there must be space-time, which is necessary for there to be a change.

kind of want to bring up kant and his space-time goggles, because they tear this to shreds, but i don't think it's an essential part of your argument anyway. you don't need to prove space-time exists outside of the human mind for the rest of your argument to follow, the fact that we perceive space-time is enough.

The place of morality
If I have no control over this change, my actions are irrelevant - there is no need to try to regulate myself. By extension, I cannot be condemned or commended for any action I take.
If I have any control over this change, I must determine what I should do (morality).

see hume's guillotine. it does not follow from the fact that you exist, that you therefore ought to perform any particular action, or that any potential action is preferable to another

i feel like this thread is going to be filled with people talking past each other, because the word "morality" is one of those very tricky words, like truth, and virtue, and justice, that has no substantive. everybody attaches a different significance to the word, and interprets it according to their own significance. we all think we're talking about the same thing, when in fact we may be talking about completely different things, in which case all our talk is nonsense. perhaps we could drop the word altogether, then attempt to define our individual significances and use invented words for each definition, for consistency. but i guess that's as useless and impossible a discussion as that in which we are currently engaged

fffff anyway

your system is mostly self-consistent, but it rests on a few faulty assumptions and thus there is no reason for anybody to believe it, although they may prefer it.
 
Maybe a better first principle would be:

I think. I could be an illusory disembodied consciousness, in which case it would not matter what I think. Or, I could be something for which the thought I experience matters.
Read: Pascal's fuckin wager
Perhaps there is no reason to pick one over the other. But why not pick the second, because there is a chance that your life might have meaning after all?
 
^you could also be a computer program

In which case, would the result of your thought matter? If you are a limited or purpose-oriented computer program, you would lack meaningful self-awareness because it would not be efficient. If you were an artificially-created intelligence, you'd follow the latter chain of reasoning.
 
Whatever you choose, you cannot get around the fact that you (or I at least; I can't prove your existence) are a thinking being, whether that being is artificial, in the Matrix, or in a world it can influence. For me to be able to think, I must be something: whether I am what I think I am or not is open to question. There can be no action without something taking that action, or no action can be said to have taken place, since there is no effect.

If I have an influence over my surroundings, I must determine how to exercise that influence, since such a decision is unavoidable (even not influencing them requires a decision to be taken). By virtue of my ability to influence my surroundings, there may be a way to determine how to influence them. Therefore, in order to establish whether such a system exists, and if so what it is, the requirements of a moral system must be determined (i.e. the same output for the same inputs, and an output for every possible combination of inputs), at which point we move on to my justification for rationality as the only possibly valid system to determine it.
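The two requirements named here - the same output for the same inputs (determinism) and an output for every combination of inputs (totality) - are just properties of a function, and can be checked mechanically. The tiny "moral domain" and rule below are my own toy stand-ins, not the system actually being proposed:

```python
from itertools import product

# A moral system, in the sense above, is a function that is total (defined for
# every combination of inputs) and deterministic (same inputs, same output).
# The inputs and the judging rule here are toy examples for illustration only.

ACTORS = ("self", "other")
EFFECTS = ("increases_power", "decreases_power")

def judge(actor, effect):
    """Toy rule: diminishing another's Power is impermissible; all else is permitted."""
    if actor == "other" and effect == "decreases_power":
        return "impermissible"
    return "permissible"

# Totality: an output exists for every possible combination of inputs.
assert all(judge(a, e) in ("permissible", "impermissible")
           for a, e in product(ACTORS, EFFECTS))
# Determinism: the same inputs always yield the same output.
assert judge("other", "decreases_power") == judge("other", "decreases_power")
```

Any candidate moral system failing either check would, on this view, be disqualified before its content is even debated.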

Do please elaborate on "Kant's space-time goggles". I thought the argument was rather sound.

Regards,

Ascalon
 
Morality is circumstantial. There is no way to determine the "right" morality in any given situation unless you determine enough parameters.
 
Ascalon said:
Whatever you choose, you cannot get around the fact that you (or I at least; I can't prove your existence) are a thinking being, whether that being is artificial, in the Matrix, or in a world it can influence. For me to be able to think, I must be something: whether I am what I think I am or not is open to question. There can be no action without something taking that action, or no action can be said to have taken place, since there is no effect.

If I have an influence over my surroundings, I must determine how to exercise that influence, since such a decision is unavoidable (even not influencing them requires a decision to be taken). By virtue of my ability to influence my surroundings, there may be a way to determine how to influence them. Therefore, in order to establish whether such a system exists, and if so what it is, the requirements of a moral system must be determined (i.e. the same output for the same inputs, and an output for every possible combination of inputs), at which point we move on to my justification for rationality as the only possibly valid system to determine it.

Just because one has determined that there is thought does not imply that one's thought has influence on the world. The action that leads to thought is probably not controlled by the thinker, nor is it likely to be the result of the thinker's actions - nothing alive now has independently given itself the ability to think.
 
That's weird, because I could swear most animals aren't consciously aware of the same things we are and that we have evolved consciousness independently! Or maybe they are to some extent, but not the same (how do you even fucking measure consciousness anyways)
 
That's weird, because I could swear most animals aren't consciously aware of the same things we are and that we have evolved consciousness independently! Or maybe they are to some extent, but not the same (how do you even fucking measure consciousness anyways)

I mean, we can't even be sure that everyone around us is conscious (oh the solipsism!)

My argument was just trying to generalize from life, to offer some concrete basis for my arguments in an attempt at a change of pace from pure abstraction. Not like my argument is destroyed if solipsism holds true.
 
That's not what I mean. Consciousness was evolved independently. It is clearly established that a worm is not as fucking conscious as a human (even though consciousness is something hard to measure objectively). I mean, it is clearly obvious that evolution has done that.
 
Whatever you choose, you cannot get around the fact that you (or I at least; I can't prove your existence) are a thinking being, whether that being is artificial, in the Matrix, or in a world it can influence. For me to be able to think, I must be something: whether I am what I think I am or not is open to question. There can be no action without something taking that action, or no action can be said to have taken place, since there is no effect.

uh no. you still haven't provided the grounds on which you believe that thought implies the existence of a thinking being, except a common sense gut feeling (which, though i can't accept, i sympathize with). i know this seems like a frustrating, nit-picking application of radical doubt, but we don't get to pick and choose which principles we're going to blindly accept

even if you had provided grounds, you haven't said anything about the form of your existence - rendering it meaningless. we know nothing about what kind of existence your doubt supposedly implies. something about which nothing can be said is the same as nothing.


If I have an influence over my surroundings, I must determine how to exercise that influence, since such a decision is unavoidable (even not influencing them requires a decision to be taken). By virtue of my ability to influence my surroundings, there may be a way to determine how to influence them. Therefore, in order to establish whether such a system exists, and if so what it is, the requirements of a moral system must be determined (i.e. the same output for the same inputs, and an output for every possible combination of inputs), at which point we move on to my justification for rationality as the only possibly valid system to determine it.

rationality is the only rational decision-making system, not the only valid system. plenty of other consistent and self-justifying systems exist that entirely reject rationality. you can't say at this point that your system is the only valid one. i think the most you can say is "if we take it as writ that reason must be our guiding principle in all matters, my system represents a working explanation and application of that principle"

Do please elaborate on "Kant's space-time goggles". I thought the argument was rather sound.

kant pointed out that everything we experience, we experience through the prism of human existence. being human is like wearing a pair of goggles we can never take off. (i think he said spectacles, but hey, goggles is more fun.) because we can never take the goggles off, we can never examine human experience from a position outside of human experience, so we cannot judge it free of the limitations of our own human intellect

there is no reason to suppose that the human intellect is capable of understanding everything in the universe, or that it interprets the universe correctly. kant's specific application of this idea was that though we experience absolutely everything as existing in both space and time, we have no reason to suppose that space and time objectively exist. if they do exist, we have no reason to suppose they resemble our perception of them in the least

also we will never be able to determine the truth about any of this, because we are incapable of examining ourselves from a position outside of our possibly-flawed human experience

(for a delightful evening of crushing intellectual horror, try applying the human-goggles principle to basically anything you have ever thought)

again, this doesn't invalidate your argument, because you only need to prove that we experience space-time for your argument to follow. it does invalidate your principle that space-time must exist.

Regards,

Ascalon

to clarify, i'm not a big fan of kant, though i sound like i am. i think he was a genius, and his philosophy is brilliant right up to the point where it becomes instructive - devolving into a ridiculous and ill-constructed attempt to objectively justify the beliefs of his lutheran upbringing. like most instructive philosophers, his best insights are often overlooked, while his ethical conclusions tend to be lauded by those who would agree with them regardless of his arguments.
 
Darkflagrance said:
I mean, we can't even be sure that everyone around us is conscious (oh the solipsism!)

Solipsism is kind of self-defeating. Obviously, you do not have conscious control over your surroundings. So there must exist some process that produces the environment that you navigate, one which you do not control consciously. But in the absence of evidence that you can have an influence on it, there is no logical reason to consider that it would be part of you. "You" is not something that's defined metaphysically as a single indivisible unit; there's no reason to make it include processes that you have no control of. So "solipsism" and "you + an external process which builds a coherent world around you" seem to be equivalent propositions (in other words, a simple play on definitions can make solipsism impossible).

Furthermore, if the people around you that you interact with extensively seem conscious and behave like you do, it's likely that the process, to some extent, simulates consciousness. Since the difference between simulating a process and executing that process is not quite clear, it seems like people around you might be conscious irrespective of the assumption of solipsism.


Doctor Heartbreak said:
uh no. you still haven't provided the grounds on which you believe that thought implies the existence of a thinking being, except a common sense gut feeling (which, though i can't accept, i sympathize with). i know this seems like a frustrating, nit-picking application of radical doubt, but we don't get to pick and choose which principles we're going to blindly accept

In so far that a "being" would be defined by what it thinks (which makes sense), thought does imply the existence of a thinking being. Namely, the thinking being that the thoughts define. None of these issues are really about existence at all, they are about the definitions of words.

even if you had provided grounds, you haven't said anything about the form of your existence - rendering it meaningless. we know nothing about what kind of existence your doubt supposedly implies. something about which nothing can be said is the same as nothing.

What is a "kind of existence" and why does it matter?

rationality is the only rational decision-making system, not the only valid system. plenty of other consistent and self-justifying systems exist that entirely reject rationality. you can't say at this point that your system is the only valid one. i think the most you can say is "if we take it as writ that reason must be our guiding principle in all matters, my system represents a working explanation and application of that principle"

Humans are ad hoc machines driven by compartmented beliefs and reasoning. Take any given human, including you and including myself, and I guarantee you that neither their belief system nor their decision-making system are consistent, even if they are otherwise rational. Consistency only matters in so far that it is useful. If the answer to a question doesn't matter, it is an unreasonable constraint to expect a system to agree with itself on it. That's why, on irrelevant matters such as the existence of God, belief systems such as faith can clash with rationality and effectively make for an inconsistent belief system.

kant pointed out that everything we experience, we experience through the prism of human existence. being human is like wearing a pair of goggles we can never take off. (i think he said spectacles, but hey, goggles is more fun.) because we can never take the goggles off, we can never examine human experience from a position outside of human experience, so we cannot judge it free of the limitations of our own human intellect

The problem is that it is not clear at all that it means anything for anyone to "examine" or "understand" anything outside of human experience. It is not fair to take human concepts such as truth, existence or understanding and transpose them outside of human experience in order to make a point about other concepts. So in the end you're saying that we can never do some undefined thing about some system from a perspective that's outside of that system (a worthless proposition at best).

Human intellect only has the limitations that it perceives it has. If we cannot perceive, concretely, any limitations to human intellect, then it follows that it has none, because to say otherwise would imply that the words we use are unintelligible to us - and that would be a failure of semantics, not a failure of our intellect.

To put it another way, should we witness a system which claims that it has no limitations, yet has them from our perspective, we have to understand that it just might be the case that the system indeed has no limitations with respect to the concepts it can cognize. So from within the system, they would be right to say they have no limitations. Now imagine that they cannot cognize the concept of a system like ours, and they cannot cognize the concept of systems such as ours existing - when they say "system", they don't mean what we mean by "system". When they say "truth", they don't mean what we mean by "truth". When they say "to exist", they do not mean what we mean by "to exist". From this, it ensues that from their system we effectively do not exist, and they are right to say that we don't. Should we manage to make them understand what we are, they would apply their concept of existence to us and conclude that we do not exist. And they would still be right, because what they mean by "to exist" isn't what we mean (and it would be silly to deem one concept "better" than the other - they use theirs, we use ours, everybody's happy, that's where it ends).

For these reasons, we might be able, eventually, to conclude that the human intellect has no limitations. And we could very well be right.

there is no reason to suppose that the human intellect is capable of understanding everything in the universe, or that it interprets the universe correctly. kant's specific application of this idea was that though we experience absolutely everything as existing in both space and time, we have no reason to suppose that space and time objectively exist. if they do exist, we have no reason to suppose they resemble our perception of them in the least

There is also no reason to suppose that "existence" is a viable concept and that objective reality is ontologically necessary. There is no reason to suppose that there is any meaningful difference between "objective space-time" and the perception thereof. There is no reason to suppose that there is only one "correct" way to interpret the universe. When you are up to that point, honestly, everything is kind of arbitrary. "To exist" is very well defined colloquially when you're talking about shoes, a rare edition of a book or a historical figure. For these things I can list criteria to determine existence. When it comes to knowing whether the universe, time or space exist or not, frankly, I'm not sure that's intelligible. What are the criteria to determine that these things exist? I think metaphysical concepts are, for the most part, undue generalizations - a shoe can exist, but "time"? What the fuck does it even mean for time to exist?

also we will never be able to determine the truth about any of this, because we are incapable of examining ourselves from a position outside of our possibly-flawed human exprience

There exists no position from which any truth can be "determined". For any set of observations there exists an infinity of matching models. Hence, uncertainty about truth is irreducible in all cases. Also, what does "flawed" mean about human experience? It doesn't seem that anything short of seeing things that make us run into walls and hurt ourselves would qualify as a flaw.
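The "infinity of matching models" point can be made concrete with a toy curve-fitting example (the data and the model family below are my own invention): every value of the free parameter c yields a different function that fits the same observations exactly, yet they all disagree away from the data.

```python
# Finite observations never pin down a unique model: each value of the free
# parameter c below gives a distinct function that fits the same data exactly.

observations = [(0, 0), (1, 1), (2, 2)]  # data consistent with f(x) = x

def model(c):
    # f_c(x) = x + c*x*(x-1)*(x-2): agrees with f(x) = x at x = 0, 1, 2 for every c.
    return lambda x: x + c * x * (x - 1) * (x - 2)

for c in (0.0, 1.0, -3.5, 100.0):
    f = model(c)
    assert all(f(x) == y for x, y in observations)  # all fit the data...

print(model(0.0)(3), model(1.0)(3))  # ...but diverge off the data: 3.0 vs 9.0
```

Since c ranges over all real numbers, the observations are matched by uncountably many models, which is the underdetermination being claimed.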

(for a delightful evening of crushing intellectual horror, try applying the human-goggles principle to basically anything you have ever thought)

Considerations that do not impact the way I'd live my life can't be crushing or horrible. At best, they are amusing. Frankly there's little that could surprise me.
 
Of course, there is one thing to be assured of in the case of rational reasoning, if there is to be any hope of an outcome:

Every party must consider and deduce every part of each other party's speech, no matter how minuscule, rather than simply blocking out the whole thing.
 