To what extent does risk have inherent value?

Curious to see what Smogon's take on this is. Here's the most salient scenario I've read illustrating the concept:

You are an avid stamp collector and there's a rare Elvis stamp you've had your eye on for a while. It is also cold outside and you don't own a pair of gloves. A mysterious dealer offers you the choice between two deals:

Deal A - The dealer flips two coins. If the first coin comes up heads, he gives you the Elvis Stamp. If the second coin comes up heads, he gives you a pair of gloves.

Deal B - The dealer flips one coin. If it comes up heads, he gives you the Elvis Stamp. If it comes up tails, he gives you a pair of gloves.

Would you prefer one deal over the other? If so, why?

If you preferred one deal over the other, how much money would you pay the dealer to take that deal over the unpreferred option?

If there was an accepted collective intuition about the value of this kind of risk, what would be the most useful way to characterize it? Is it a bias which can be fixed by math education, or a failing of math to correctly evaluate the scenario? If the former, why might we have this bias? If the latter, what kind of formulation makes the most sense?

I don't think math fails to describe this situation, because the utilities of the outcomes can reasonably be added under the assumptions. In other words, having both the stamp and the gloves feels about as good as having neither feels bad. If I were forced to choose I would pick Deal A, because Deal B entangles the states of having the stamp and having the gloves, which feels weirder to me, but I wouldn't pay money for it.
 

vonFiedler

I Like Chopin
It's a 50/50 chance to get something I can sell on eBay either way, but if I don't get the stamp, at least I get the gloves with Deal B. I don't really give a shit about the gloves either way; I have almost never worn gloves just because it was cold.
 
I mean, the assumption is that these are things that you want; you can replace the stamp and gloves with any two things you want for entirely unrelated reasons.
 
Ahh I love this topic!

Whenever we analyse risk I use something called the risk assessment matrix, which rates both the likelihood of the risk occurring and the severity associated with it. Obviously a low chance of happening and low severity is ideal, while a high chance and high severity is the worst kind of scenario, but depending on the discipline you are applying this matrix to, you often need to make your own judgement.

As for your case, B seems mathematically the better deal: with A you have a 75% chance of getting at least one item and a 25% chance of getting both, while with B you have a 100% chance of getting something. This means A only has value over B because it gives you a 25% chance of obtaining both items, at the cost of a 25% chance of getting neither. So unless you're happy to go home empty-handed one time in every four, go with B.
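If you want to sanity-check those numbers, here's a quick sketch (Python) that just enumerates the equally likely coin results for each deal; the item names are only labels, nothing else is assumed:

```python
from itertools import product

# Deal A: two independent fair coins; heads on the first wins the stamp,
# heads on the second wins the gloves. Each of the four flip combinations
# is equally likely (probability 1/4).
deal_a_outcomes = []
for first, second in product("HT", repeat=2):
    items = []
    if first == "H":
        items.append("stamp")
    if second == "H":
        items.append("gloves")
    deal_a_outcomes.append(items)

# Deal B: one fair coin; heads wins the stamp, tails wins the gloves.
deal_b_outcomes = [["stamp"], ["gloves"]]

p_both_a    = sum(len(o) == 2 for o in deal_a_outcomes) / 4   # 0.25
p_nothing_a = sum(len(o) == 0 for o in deal_a_outcomes) / 4   # 0.25
p_atleast_a = sum(len(o) >= 1 for o in deal_a_outcomes) / 4   # 0.75
p_atleast_b = sum(len(o) >= 1 for o in deal_b_outcomes) / 2   # 1.0

print(p_both_a, p_nothing_a, p_atleast_a, p_atleast_b)
```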

Unless you consider the two items to have very different values, the latter scenario seems to me more beneficial for the player than for the dealer.

But hey I don’t recommend gambling anyway, it’s no different to pouring money down the drain.
 
With Deal A there's a chance you'll get both, but there's also a chance you'll get neither. With Deal B you'll always get something out of the deal, whether it's something you want or something you need. I'll go with B. I'm not really a risk taker / greedy, and with B I'll always get something out of it.
 
See, that's what I'm curious about. When you give this kind of scenario to a lot of people, there's an overwhelming sentiment that being guaranteed to get something is better than taking an equivalent gamble with the same expected value. You can even twist it so the gamble is better on average and people will still take the safe bet (this is where classical decision theorists would call the intuition a bias or a paradox).

But if the value of taking the safe bet is quantifiable in some way, then the math that captures it does a better job of describing reality. Should one reconstruct payoff matrices based on relative value instead of absolute value? Should you treat it as a heuristic, like a repeated-game strategy (à la tit for tat in the prisoner's dilemma)? Should you profile each decision maker with a risk curve?

The reason I asked the money question is that if the safe option is worth something inherently, I'm interested to know how much, compared to the value of the things you could be getting.
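One hedged way to put a number on it, purely as a sketch: assume the items are worth some dollar amounts to you (the $100 stamp and $20 gloves below are made up) and assume a standard risk-averse CARA utility; then the gap between the certainty equivalents of the two deals is the most you'd pay to swap. Every specific number here is an assumption, not anything established in this thread:

```python
import math

def certainty_equivalent(lottery, a):
    """Certainty equivalent under CARA utility u(x) = 1 - exp(-a * x)."""
    eu = sum(p * (1 - math.exp(-a * x)) for x, p in lottery)
    return -math.log(1 - eu) / a   # invert u at the expected utility

stamp, gloves = 100.0, 20.0        # hypothetical dollar values for the two items
deal_a = [(stamp + gloves, 0.25), (stamp, 0.25), (gloves, 0.25), (0.0, 0.25)]
deal_b = [(stamp, 0.5), (gloves, 0.5)]

for a in (0.005, 0.02, 0.05):      # increasing risk aversion
    gap = certainty_equivalent(deal_b, a) - certainty_equivalent(deal_a, a)
    print(f"risk aversion {a}: would pay up to ${gap:.2f} for Deal B over Deal A")
```

With those made-up numbers the gap is positive and grows as the risk-aversion parameter grows, which is one way of saying how much the "sure thing" is worth relative to the items themselves.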
 

internet

no longer getting paid to moderate
There's inherent value in risk when it's about things that otherwise have little value. This value comes from the excitement involved in risk. When we're talking about valuable things, I'm more likely to be stressed out by risk than excited.

In this particular example, i think i would value the elvis stamp far above the gloves (or, if i'm genuinely at risk of frostbite, the gloves far above the elvis stamp). If I value either object strongly above the other, the 50/50 flip is the best choice by far.
 
you can measure your risk aversion and estimate your utility function to choose between decisions that have the same expected value (done by finding sure amounts that make you indifferent between two choices, then bisecting the curve and repeating).
a less complicated way could be to use other decision rules such as MaxMin (pick the option whose worst case is best) or MaxMax (pick the option whose best case is best).
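a rough sketch of those two rules applied to the deals here (python; the numeric "values" are just hypothetical ranks for the four outcomes) — note that they disagree on this example:

```python
# Hypothetical ordinal values for the possible outcomes:
# neither = 0, gloves = 1, stamp = 2, both = 3.
deals = {
    "A": [3, 2, 1, 0],   # Deal A can yield both, stamp only, gloves only, or neither
    "B": [2, 1],         # Deal B always yields exactly one item
}

maxmin_choice = max(deals, key=lambda d: min(deals[d]))   # best worst case -> "B"
maxmax_choice = max(deals, key=lambda d: max(deals[d]))   # best best case -> "A"
print(maxmin_choice, maxmax_choice)
```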

you can check out Ward Edwards if you are interested in SO and decision theory
 
essentially what you've created here is that deal a is just deal b plus zero mean noise (so that for any realization x in deal b, you add some random variable H_x with mean zero, and the combined outcome is the realization x plus the realization of H_x). more intuitively, what you've done is taken mass from the center of the distribution and moved it to the tails while preserving the mean

there is a concept in economics called second order stochastic dominance: if F can be obtained from G by a mean preserving decrease in risk, or equivalently if any person with concave utility prefers F over G, then F SOSD G.

and how much you prefer F to G depends on your degree of risk aversion. you can measure that with the arrow-pratt coefficient, which is the standard technique used in the econ literature
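as a quick numeric check of the SOSD claim (python, with made-up dollar values for the stamp and gloves): both deals have the same mean, and for each of a few concave utilities the expected utility of deal b is at least that of deal a, which is exactly what second order stochastic dominance predicts:

```python
import math

stamp, gloves = 100.0, 20.0   # made-up dollar values; any positive numbers work
deal_a = [(stamp + gloves, 0.25), (stamp, 0.25), (gloves, 0.25), (0.0, 0.25)]
deal_b = [(stamp, 0.5), (gloves, 0.5)]

mean = lambda lot: sum(p * x for x, p in lot)
eu   = lambda lot, u: sum(p * u(x) for x, p in lot)

concave_utilities = {
    "sqrt":  lambda x: math.sqrt(x),
    "log1p": lambda x: math.log(1 + x),
    "cara":  lambda x: 1 - math.exp(-0.03 * x),
}

print(mean(deal_a), mean(deal_b))                  # identical means for both deals
for name, u in concave_utilities.items():
    print(name, eu(deal_b, u) >= eu(deal_a, u))    # True for each concave utility
```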
 
that being said, it is not the case that SOSD directly implies preference when it comes to portfolios of assets, if you have the choice of multiple portfolios. in fact i remember the prelim one year dealt with a case in which you had a riskless asset and a risky asset; the risky asset had a FOSD increase in payoff, but the investor actually decreased the size of the risky asset in his portfolio. the point there was that the investor would want to rebalance risk so that his overall portfolio would be an MPS with a risk decrease over his previous allocation

stronger conditions than SOSD exist, like the monotone likelihood ratio (doesn't apply here because your support is discrete, i think)

monotone comparative statics with risk is very interesting. let me know if you want to talk about it
 
so in essence: deal a is deal b plus zero mean noise, provided the coin flips of (-stamp, +gloves) and (-gloves, +stamp) are both zero mean. in this case, so long as your utility is concave, which is the usual assumption, you would prefer deal b due to SOSD, and would actually pay to get deal b over deal a, depending on your risk aversion.
 
I've read a fair bit of decision theory but far less econ. I don't put much stock in the simpler MaxiMin-type stuff, but modeling the risk aversion of agents does interest me (i.e. how one would do that properly, and whether one ought to be risk averse within a certain threshold)

apricity When you say concave utility function, are you talking about marginal utility or utility as a function of something else? In any case it's nice to see SOSD spelled out formally. I'll look up monotone likelihood ratio, and I would be interested in learning more about monotone comparative statics with risk.
 
utility is always a function of inputs. in this case it is a function of how many gloves or elvis stamps you have. concave utility is a natural assumption to model how people actually behave. we normally work with three assumptions for comparative statics questions: that your utility is twice continuously differentiable, that your utility is strictly concave (in the normal sense that we say a function is strictly concave), and that the consumption bundle a consumer demands given the parameters of the problem lies within the interior of his budget set. as you might expect, these three assumptions set us up to use traditional kuhn-tucker methods of solving problems, and they will guarantee a unique maximizing solution to the consumer problem.
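as a toy illustration of that setup (python with scipy; the log utility, prices, and income below are all made up), a strictly concave utility with an interior optimum gives a unique solution, and a standard constrained solver lands on the same bundle as the closed form:

```python
import numpy as np
from scipy.optimize import minimize

# made-up consumer problem: max 0.6*log(x1) + 0.4*log(x2)  s.t.  2*x1 + 1*x2 = 10
alpha, prices, income = 0.6, np.array([2.0, 1.0]), 10.0

def neg_utility(x):
    return -(alpha * np.log(x[0]) + (1 - alpha) * np.log(x[1]))

res = minimize(
    neg_utility,
    x0=[1.0, 1.0],
    method="SLSQP",
    bounds=[(1e-9, None), (1e-9, None)],
    constraints={"type": "eq", "fun": lambda x: prices @ x - income},
)

closed_form = [alpha * income / prices[0], (1 - alpha) * income / prices[1]]
print(res.x)        # numeric optimum, roughly [3.0, 4.0]
print(closed_form)  # log-utility closed form: x_i* = share_i * income / price_i
```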
 
i think the thing that will interest you the most is arrow-pratt measures of risk aversion and DARA/CARA kind of stuff
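a tiny sympy sketch of the arrow-pratt coefficient A(x) = -u''(x)/u'(x), using the two textbook utilities (nothing specific to this thread): exponential utility gives a constant (CARA), while log utility gives 1/x, which falls as wealth rises (DARA).

```python
import sympy as sp

x, a = sp.symbols("x a", positive=True)
arrow_pratt = lambda u: sp.simplify(-sp.diff(u, x, 2) / sp.diff(u, x))

print(arrow_pratt(-sp.exp(-a * x)))   # a    -> constant absolute risk aversion (CARA)
print(arrow_pratt(sp.log(x)))         # 1/x  -> decreasing absolute risk aversion (DARA)
```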
 
Editing my response since I misunderstood the question.
I would take option A. There is a 75% chance to get at least one and a 25% chance to get both. Heatran should teach me to take option B, but I'll still take a 25% chance to get nothing because the reward is worth it IMO.
 
Who cares?

I don't mean this in a rude way, but in a "why is this relevant to the real world" way. Theoretical economics is good and all and I have a lot of respect for academia, but in my humble opinion it mostly fails to adequately deal with scenarios of the real world. The scenario you describe makes it difficult to connect it, and the questions you pose surrounding "this kind of risk" (which I'm really not sure what you mean by), to risk in general in the world. Your take is fine for the scenario you presented, but that scenario isn't a real-world decision. What happens when the payouts can't be adequately calculated by mathematics/economics/whatever?

I think there's probably a pretty interesting discussion to be had on real life risk but I'm not sure that's the direction this thread is trying to go in. I have no problem in analyzing your question from a theoretical vantage point but I just want to point out that I think there's also an interesting discussion to be had in broadening the topic as well.
 
there must be something that i'm missing here because it's very important to know how to calculate risk in finance for starters. this isn't a hypothetical ivory tower thing, this is something that investors use when creating portfolios. many billions of dollars is based on "how do we measure risk". and it's a great question because there is some element of philosophy behind this mathematical question.

in fact i think i'd happily take the stance that theoretical economics is much more "real world" than payouts on a personal scale that can't be described by math/econ, on the basis that one of the problems carries more weight than the other
 

Ullar

card-carrying wife-guy
im a sucker for risk, i LOVE booster packs of minis/cards/etc.

that could be a poor fiscal indicator though, or the early stages of a gambling problem. explains my d&d obsession anyway
 

Soul Fly

IMMA TEACH YOU WHAT SPLASHIN' MEANS
Who cares?

I don't mean this in a rude way, but in a "why is this relevant to the real world" way. Theoretical economics is good and all and I have a lot of respect for academia, but in my humble opinion it mostly fails to adequately deal with scenarios of the real world.
Hey fwiw, if you end up on a game show someday, knowing how risk operates can make you very rich.

http://mathworld.wolfram.com/MontyHallProblem.html
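A quick simulation, if anyone wants to see the numbers rather than take the link's word for it (Python sketch; the 100,000 trials figure is arbitrary):

```python
import random

def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # host opens a door that is neither the contestant's pick nor the car
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=False))   # roughly 1/3
print(monty_hall(switch=True))    # roughly 2/3
```

Switching wins about two times in three, staying about one time in three.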

Even just a basic understanding of the mathematics of risk could so dramatically improve society. For instance, imagine how entire societies could make better decisions with just a simple intuitive understanding of long term vs. short term, and how that would snowball into a giant positive effect rippling through all our lives. Imagine people not getting cajoled into making terrible financial decisions. Maybe fewer youngsters trying out cocaine and cigarettes because they understand there is an 80% probability that they'll become addicts. Such people maybe electing a politician who would be more inclined towards preserving the climate instead of clinging to oil and coal. Banks behaving more responsibly in the market, and being held accountable by customers who understand the gravity of the risks being taken with their money. Deflation in average credit, a rise in mean wealth per capita, not to speak of the automatic wealth redistribution by virtue of this correction in mentality; general wholesale debt reduction, individually and nationally. Better governments, better diplomats. Better, saner international diplomacy. Less war... etc etc etc.

Just because something is not a part of the regular imagination of what you call "the real world" doesn't mean it cannot or should not be part of it. The real world constantly changes.
 
Who cares?

I don't mean this in a rude way, but in a "why is this relevant to the real world" way. Theoretical economics is good and all and I have a lot of respect for academia, but in my humble opinion it mostly fails to adequately deal with scenarios of the real world. The scenario you describe makes it difficult to connect it, and the questions you pose surrounding "this kind of risk" (which I'm really not sure what you mean by), to risk in general in the world. Your take is fine for the scenario you presented, but that scenario isn't a real-world decision. What happens when the payouts can't be adequately calculated by mathematics/economics/whatever?

I think there's probably a pretty interesting discussion to be had on real life risk but I'm not sure that's the direction this thread is trying to go in. I have no problem in analyzing your question from a theoretical vantage point but I just want to point out that I think there's also an interesting discussion to be had in broadening the topic as well.
Well, if you ignore the math aspect, do you think that, all other things being equal, there's value in "getting something" when compared with similar decisions that carry higher risk?

I took the decision theory/philosophy angle over the investment/economics angle precisely because it's more relatable. Optimizing money isn't exactly easy, but money is a scalar, lending itself very well to quantitative analysis. If that value of risk holds for things you might find difficult to evaluate mathematically, or don't necessarily want multiples of, that might better hint at a greater philosophical truth and raise the question of how we express that truth.

I would be very interested in what real life scenarios you have which expand the discussion, and I'm not afraid of broadening the topic.
 
there must be something that i'm missing here because it's very important to know how to calculate risk in finance for starters. this isn't a hypothetical ivory tower thing, this is something that investors use when creating portfolios. many billions of dollars is based on "how do we measure risk". and it's a great question because there is some element of philosophy behind this mathematical question.

in fact i think i'd happily take the stance that theoretical economics is much more "real world" than payouts on a personal scale that can't be described by math/econ, on the basis that one of the problems carries more weight than the other
I have a very "non-academic" opinion in that I know it's probably wrong but I have it anyway. I think finance is greatly overrated in terms of what it provides to society, and as a result I don't really find the commoditizing of risk to be of particular importance to society in general beyond people in insurance/finance/whatever. My wording was super broad (and honestly I was thinking about throwing something in there about insurance anyways), so I don't really mind your take, but I think it's important to make clear that I'm thinking about risk from a broader perspective of how it relates to societal problems, whereas the risk in finance that you are referring to is more of a localized problem in the sense that it's an issue only for certain groups. Obviously the scenario Blazade has given isn't "real world" and insurance is clearly more "real world" than it, but the "real world" I am referring to is more society as a whole. Again that's my b for my wording not being clear enough (but I'm glad that it spawned some interesting posts so it's all good I think).

You don't need to respond and tell me how finance is good for society because I get it (although you can if you want!). Finance/the free market/capitalism are all fine and good, and I know how I should feel about it rationally, but I don't, just based on biased personal experience I suppose. No disrespect to the finance people out there anyways (although why should they care since they're making $$$!)
 
ok. i'll say one thing about insurance though that i think is interesting. insurance is one of the few welfare increasing behaviors that arise naturally. it is like sharing erasers in elementary school.

https://www.jstor.org/stable/2951659?seq=1#page_scan_tab_contents

please take a quick second to read the abstract of this famous paper

very few financial instruments arise naturally. bonds are not natural. neither are derivatives, neither are interest rates. but insurance! insurance arises naturally in village economies because there is an inherent need to manage risk even in the most impoverished and underdeveloped countries. because risk is an inherent part of life and there is a need to manage it.
 
Hey fwiw, if you end up on a game show someday, knowing how risk operates can make you very rich.

http://mathworld.wolfram.com/MontyHallProblem.html

Even just a basic understanding of the mathematics of risk could so dramatically improve society. For instance, imagine how entire societies could make better decisions with just a simple intuitive understanding of long term vs. short term, and how that would snowball into a giant positive effect rippling through all our lives. Imagine people not getting cajoled into making terrible financial decisions. Maybe fewer youngsters trying out cocaine and cigarettes because they understand there is an 80% probability that they'll become addicts. Such people maybe electing a politician who would be more inclined towards preserving the climate instead of clinging to oil and coal. Banks behaving more responsibly in the market, and being held accountable by customers who understand the gravity of the risks being taken with their money. Deflation in average credit, a rise in mean wealth per capita, not to speak of the automatic wealth redistribution by virtue of this correction in mentality; general wholesale debt reduction, individually and nationally. Better governments, better diplomats. Better, saner international diplomacy. Less war... etc etc etc.

Just because something is not a part of the regular imagination of what you call "the real world" doesn't mean it cannot or should not be part of it. The real world constantly changes.
Thanks for your post. What you've said here is more along the lines of what I'm interested in when thinking about this topic. I don't disagree with what you've said here at all, but I'm not sure exactly how much it relates to what Blazade has said in the OP. What you're getting at, to me, is less about how we conceptualize risk and more about how we educate individuals about the nature of risk and teach them to make rational decisions. I absolutely support that, and I think the heart of what you're talking about has to do with education. I think that's a noble pursuit and of course I'm on board with it. Even if just one of those things could come true...

The difference between what I'm thinking about and what you've said here has to do with how I've interpreted Blazade's initial posting of the "problem" and his "solution." I think you've expanded what he's said and made the argument (if I'm reading you correctly) that if we teach individuals the mathematics of risk, we can help people and thus societies make better decisions. This is good. Again, I'm for that. But my issue then is that I feel like what your post is implying is that we need to make everyone rational actors when 1. that's not possible and 2. do we really want to do that? (in the sense that, is this really the best and only approach possible?)

I say 1 isn't possible not because objectively it isn't, but because I believe it's not a realistic way, or the only way, to approach the problem of risk. For example, when you talk about fewer people doing cocaine/cigs, I don't think people think about that decision as "Well, I'm weighing 80% odds to get cancer if I smoke this cigarette..." I think humans have demonstrated that they often do not and cannot act in a rational manner. It's not realistic to think that a mathematical understanding of risk will suddenly transform society, at least not in the near future. That's not to say we should give up on the idea, of course not. I just think that purely focusing on a mathematical understanding of risk can be problematic.

With 2, I guess I'm just thinking that there's some value in the irrationality involved in people's decisions. Not always of course, but I'm just wondering whether instead of focusing on what's "rational" and "right" whether there might be some merit in thinking more holistically about risk and how people respond to risk (instead of just that mathematical understanding). I suppose there's something philosophical involved in thinking about a world where everyone is a rational actor and what happens to their individuality but that's not really where I'm coming from.

I'm not really sure if I was able to better articulate my thoughts on this topic and I'm doubting the coherency of this but I think in my next post where I respond to Blazade I'll be able to better explain what I'm thinking about.
 
Well, if you ignore the math aspect, do you think that, all other things being equal, there's value in "getting something" when compared with similar decisions that carry higher risk?

I took the decision theory/philosophy angle over the investment/economics angle precisely because it's more relatable. Optimizing money isn't exactly easy, but money is a scalar, lending itself very well to quantitative analysis. If that value of risk holds for things you might find difficult to evaluate mathematically, or don't necessarily want multiples of, that might better hint at a greater philosophical truth and raise the question of how we express that truth.

I would be very interested in what real life scenarios you have which expand the discussion, and I'm not afraid of broadening the topic.
Like others have said, it depends on the "utility" I receive. I would rather have something than nothing, so in that sense I am risk averse. In that scenario I don't think there's a "right" mathematical answer.

I guess it's relatable in the sense that you can conceptualize the scenario but for me I suppose it's hard to relate to because it's not a plausible scenario. (I don't think that means there was anything wrong with your OP, it's a tough issue to describe and write about, something I'm still struggling with).

I suppose the risk I am mostly thinking about, which your scenario and your take don't capture, is risk I would define as "catastrophic." Catastrophic risk is risk that isn't calculable. The reason catastrophic risk makes a mathematical understanding of risk incoherent for me is that it's unable to define the nature of catastrophe. The consequences of a potential catastrophe are treated as unimportant because the probability is considered negligible. And yet, catastrophes happen anyway and the risk becomes a reality. A mathematical understanding doesn't really treat these situations adequately in my opinion.

The big example is climate change. How do you mathematically evaluate the risk and consequence that result from climate change? The risk is different perhaps for a given individual but it's a societal problem. I think the mathematical risk approach implies that there's a singular solution that everyone needs to listen to which may not be correct and may not be the best way to think about risk because not all people act and think completely rationally. I think that's ok and instead of trying to fix "irrational" fears and concerns, maybe we need to listen to them and respond to them instead of moving forward as is because the risk is negligible.

I'm tired and can't really present anything better or a solution, but I would say that when approaching situations in which catastrophic risk is a problem that needs to be looked at, the input of individuals and local communities is especially important in thinking of ideas and solutions instead of just relying on experts and the "math." Hopefully this makes some sense.
 
