From a derivation point of view, it looks like Microsoft uses the normal distribution, while Glicko uses the "extreme value" distribution. Practically speaking, there is no difference here (and given that the TrueSkill link is a non-technical paper, it may in fact have been derived from an extreme value distribution as well; they're just not telling us). According to Glickman, the extreme value distribution simply makes all of these formulas easier to derive, while in practice the choice doesn't matter.

Now, the interesting thing is that in the Glicko papers I've read, part of the calculation is an "expectation" function: a formula for the probability that player A wins a match against player B. That function is absent from the TrueSkill paper, but again, that paper appears to be non-technical; it just states the "TrueSkill" formulas.

As for TrueSkill vs Glicko2: in TrueSkill, your uncertainty (and with it the speed of rating changes) only shrinks as you play. Glicko2, on the other hand, realizes that you've been winning too many games in a row and will let your rating move a little faster.

Finally, the last difference is trivial. Glicko/Glicko2's default scale puts the average player at 1500 with a starting uncertainty (RD) of 350. In all the examples of TrueSkill, the scale is much smaller: a default skill (mu) of roughly 25-30 and a starting uncertainty (sigma) of roughly 8.
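To make the "expectation" difference concrete, here is a sketch of both win-probability calculations. The Glicko expectation follows the formulas in Glickman's paper; the TrueSkill version is my own illustrative helper (the Gaussian CDF of the scaled skill difference, with an assumed performance-noise parameter `beta`), not code from Microsoft's paper:

```python
import math

def glicko_expected_score(r_a, rd_a, r_b, rd_b):
    """Probability that A beats B under Glicko's logistic model.
    g() dampens the expectation by the players' combined uncertainty."""
    q = math.log(10) / 400
    g = 1 / math.sqrt(1 + 3 * q**2 * (rd_a**2 + rd_b**2) / math.pi**2)
    return 1 / (1 + 10 ** (-g * (r_a - r_b) / 400))

def trueskill_win_probability(mu_a, sigma_a, mu_b, sigma_b, beta=25 / 6):
    """Probability that A beats B under a TrueSkill-style Gaussian model.
    beta=25/6 is an assumed default for the mu~25 scale; this is a
    sketch of the idea, not Microsoft's actual implementation."""
    denom = math.sqrt(2 * beta**2 + sigma_a**2 + sigma_b**2)
    # Standard normal CDF of the scaled skill difference
    return 0.5 * (1 + math.erf((mu_a - mu_b) / (denom * math.sqrt(2))))
```

With equal ratings both functions return 0.5, and both move toward 1 as A's rating pulls ahead of B's; note how each one uses its scale's own uncertainty numbers (RD around 350 vs sigma around 8).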