I think emotion and logic are not very well defined in this thread to start with.
There are two separate cognitive processes at hand here: first, you have to determine what the objective is, and then you have to determine what steps to take to achieve it. In principle, the former is in the realm of emotion and the latter is in the realm of logic. For instance, "eating ice cream" might be the objective, and it is fully emotional; then, logic might tell you that your fridge is empty, that the ice cream stand nearby is closed, and thus that you need to go to the supermarket.
However, objectives often clash with one another: for instance, maybe you want to lose weight. That's emotional too, because you would only do it if you felt bad about your appearance or felt terrible physically. But then logic tells you that you should not eat ice cream. In this case, logic tells you about the (in)compatibility of your various objectives, and you have to figure out which one you care more about. The tricky part is often weighing current emotions against future ones: clearly, you will feel better now if you eat the ice cream, but you will feel worse later, and it usually seems like a good idea to maximize your happiness over time rather than just in the moment. Failure to properly weigh the present against the future is often called "emotion clouding reason", but really you are just optimizing your future emotions badly. In the end, objectives are always emotional.
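To make that trade-off concrete, here is a toy sketch in Python; the payoff numbers and the plain sum over time are invented purely for illustration (nothing in the thread specifies them), but they show how a choice that feels best right now can still come out behind once future feelings are counted:

```python
# Toy illustration of weighing current emotions against future ones.
# All payoff values are made up; the only point is that the option
# that feels best right now can lose once later feelings are included.

def total_happiness(payoffs):
    """Sum emotional payoffs over time (no discounting, for simplicity)."""
    return sum(payoffs)

# [now, later] emotional payoffs for each choice, on an arbitrary scale
eat_ice_cream  = [+5, -8]   # tasty now, feel worse about the diet later
skip_ice_cream = [-1, +4]   # mild disappointment now, feel better later

print(total_happiness(eat_ice_cream))   # -3: wins "now", loses overall
print(total_happiness(skip_ice_cream))  #  3: the better bet over time
```

Looking only at the first number in each list is exactly the "badly optimizing your future emotions" failure described above.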
Similarly, the idea of making the world "as good as possible" is ultimately grounded in emotion: you only pursue it because it makes you feel good about yourself, or because you think that working for the good of humanity will make humanity pay you back. Working to make the world better when you don't really care about it emotionally will just make you miserable, and no amount of logic can make you care about something if it can't be linked to anything you already care about.
On the other hand, if you do care about the good of humanity, then you need logic to tell you what will work and what won't. For instance, with no other information, killing off one person isn't as bad as killing off a thousand; but maybe that one person could cure cancer, or maybe the earth suffers from overpopulation, and then you might decide differently. However, you can easily run into hard limits: for instance, it is logically defensible that killing off a huge chunk of the population might be a good thing in the long term, since it would pre-empt overpopulation problems. Yet most people would be incapable of conceiving that "pruning" the population might be in humanity's best interests (keep in mind I'm talking about nuking entire cities for the sole purpose of population control). This wouldn't be a case of emotion clouding reason; in practice there are simply hard limits to how much people can care about humanity in the abstract without losing their own humanity. Only certain compromises are acceptable, and that's fine.
That is completely false. In a group with altruistic genes, individuals have a greater chance of survival because of lower internal strife and fairer resource distribution. Thus, an altruistic group has a greater chance of growing larger than other groups, and will prevail in the long term. In general, evolution would predict that people naturally care a lot about their immediate family, care about people they know, care a little about people in their community or with a similar genetic makeup, care a tiny bit about the rest of humanity, and barely give a damn about anything else.
There is absolutely no logical reason why a God would reward believers and punish non-believers, rather than doing the exact opposite. For instance, you are making the completely unfounded implicit assumption that God does not love irony. If he does, I am sure you can imagine just how screwed you are right now.