it's cool that you think all this, but you haven't given us any reason to agree with you. why should a moral code expressed in simple axioms be superior to one expressed in complex ones? just because you say so?
Because you cannot develop a coherent moral theory if you start including exceptions and loopholes, or if the principles you adhere to shift from situation to situation. Like "don't kill people who aren't threatening you" suddenly becomes "don't kill people who aren't threatening you, except when...". That's the kind of thing that automatically dooms a moral system.
You have to compare them qualitatively too. If a religion gives me a shitty moral system that harms me and others around me, I don't give a shit if it's stable. That was the actual point I was making and you did nothing to prove it wrong.
I really doubt such a system could be logically consistent, or even present itself as a moral system. I suppose it's been done before with, say, the Indian caste system, but even that was never explicit about who would belong to each group (for all we know, the castes could originally have described people who reached their positions on relative merit, without any state or religious coercion, and only developed otherwise because leaders exerted their power to the detriment of others).
What about self-defense? War? Death penalty?
I meant "do not kill other than in self-defense". My mistake; I usually assume "do not kill" defaults to that. In any case, self-defense is always moral assuming the threat is legitimate, offensive war is absolutely illegitimate (though it gets sticky when we get into intervening in order to defend others), and the death penalty is not moral because it is retributive, not self-defensive.
A moral rule such as "don't eat pork", adopted because pork is unsanitary, stops being pertinent when eating pork stops being unsanitary.
That's not a moral rule though; that's a societal preference. Morality generally regulates relationships between humans.
If, in the future, humans became part machine and it were possible to "back up" brains like any other data, "killing" would not be nearly as bad as it is now, thus shifting the moral boundaries.
This would only eliminate half the issue with killing: while it would not permanently destroy someone (and this assumes that all brains are backed up), it would still be an act of aggression and thus immoral, just as punching someone without cause is immoral even though it generally doesn't kill anyone.
If, in the future, every good could be infinitely reproduced at a whim like digital goods, it would not make much sense for anyone to steal anything, and theft would probably cease to be immoral.
Morality is relative to people (because it benefits them) and thus it has to adapt to changes in people (changes in what benefits them).
The greatest concession I can give to pragmatism is that morality benefits humans; but since human nature (whatever you believe it is derived from) is generally fixed and unchanging, morality must be generally fixed and unchanging as well. Also, I find that "moral laws" that condone slavery and oppression are actually violations of fundamental morality rather than moral systems proper.
Correct, but I would prefer to say that the concept of theft would cease to exist, as theft by definition must deprive someone else of a good. If one cannot deprive someone of a good, then one cannot steal from them.