This article is rated Start-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects: | |||||||||||||||||||||||||||||||||||||||||
|
Thanks for your addition to the entry on the Ellsberg paradox. I must say I don't understand what you are getting at. Would you might modifying the entry to be a bit more clear? I am also worried that the explanation may mislead readers. The problem is not that individuals are choosing an option that has a worse expected utility for them, but rather that there is no utility function that can account for their behavior without including some sort of disutility for ambiguity (or someting). Anyway, thanks again for your interest! best, -- Kzollman 07:38, Jun 22, 2005 (UTC)
dear kevin, thx for your note. sorry, i haven't made myself clear - would you help me to clarify it? i would define 'mistrust to a stranger' as utility function - probably one could also interpret it as 'disutility for ambiguity', but i was trying to underscore that individuals behave reasonably when choosing to trust a relative and distrust a stranger. this seems to be the case, when we have mental reasoning that Y balls are less then 50% in first gamble and more then 50% in the second. the only constant i see, is a disbelief in possibility to win all, like in case with sister and brother. probably, i am defining 'utility' too broad, but then a question arises - should psychological terministic screens be used when one is dealing with economic terms? best twice, - unmet 22:36 jun22 2005
I seriously doubt that being 'cheated' is the most likely explanation of the paradox. In simpler studies of this type, when forced to choose between a sure thing and a bet, the expected value of the bet must usually be significantly higher before the person will choose it. This is usually explained as individuals just being averse to risk. In this case, taking the less known option gives a 50% chance of a worse option, for no greater pay, so risk aversion completely explains the participants behavior without needing to invoke the idea that the dealer is trying to cheat them. -- Kevin Saff 21:22, 14 July 2005 (UTC)
I have removed "The mistake the subjects are making..." In fact, the entire last section is speculative. Can we cite some sources on this? For instance, was this the explanation given by Ellsberg? If so, can we cite it as such? Fool 14:39, 15 August 2005 (UTC)
I think the probable solution is that many subjects misinterpret the scenario, just as I originally did! (I was in the process of writing a refutation, when I realised my mistake!) If subject assumes that they will be repeating the same gamble, using the same bag of balls each time (with balls being replaced before next draw), then, although A and B have same initial expected payout, A is less risky that B; similarly C and D have equal expectations, but D is less risky than C (the less risky gambles having the a fixed expection, whatever mixture of balls are in the bag - within defined constraints). In this case, rational people having any positive risk aversion prefer the lower risks; those with negative risk aversion prefer the risky ones.
Considering a single gamble is a bit artificial and "everyone" knows that probabilities only apply to averages of many runs of the "same" event. Even so, why should one assume that the "same" event implies using the same bag of balls each time (with chosen ball returned to bag), rather than a new bag of balls (or totally reconstituted contents) each time (albeit satisfying the specified ball distributions each time)? The analysis of the latter is the same as the one presented in the live page, but the former is what I initially assumed, perhaps because, subconciously, I knew it would be simpler to do in reality.
Most people probably do not choose their risk aversion rationally, especially in a hypothetical problem, but there is a rational attitude to risk, depending on the utility of the payouts to that person. With the above misinterpretted scenario, if, say, a below-average payout had no utility, then the low risk gambles are preferable; but if, say, only above-average payouts have any utility, then high risk gambles are preferable. (Strictly speaking, it may be rational to choose lower expections and various risks (variability) of monetary payout; but in terms of utility function, it is always rational to maximise utility, and risk (variability) of utility is irrelevent (personal preference being subsumed in the personal utility function).
John Newbury 22:36, 22 August 2005 (UTC)
The 'psychological explanation' that has been cut from this page resembles a simple (perhaps too simple) version of the drama-theoretic explanation of ambiguity aversion. I've added a link to this explanation. (I've also corrected the number of red balls from 20 back to 30. 30 is what it should be, without doubt.)
The drama-theoretic explanation invokes the 'personalization hypothesis'. This says that humans interacting with an entity in a signficant way will consciously or subconsciously personalize that entity -- ie, think of it as another party in an ongoing dramatic interaction. Luck (as in 'Luck be a lady') or death (as in 'Death has come to take me') are almost inevitably personalized in this way. Now what an experimenter presents as an ambiguity will, according to this hypothesis, be experienced as something that's chosen by the (personalized) experimental set-up. An assurance that the set-up will make an impartial, disinterested choice may be distrusted. The resulting 'trust dilemma' is reduced or eliminated by choosing an alternative that requires less trust to be placed in the experimental set-up. Nhoward 19:17, 23 August 2005 (UTC)
Two things. First, chosing the pairs A and D, or B and C respectively insures that you get $100 with no risk. I find that attractive and not at all irrational. Second, this is the Nash equilibrium (ie, the minimum amount you are sure to gain) for the game. -- KarlHallowell 19:46, 23 August 2005 (UTC)
I have removed the following text from the article:
The reference mentioned there is a posting on a web form by an unsigned author. It does not meet the requirements of notability. In addition, the explanation from the web forum is almost exactly the same as the explanation that is discussed above (and the consensus decision was that it should be removed). --best, kevin ··· Kzollman | Talk··· 16:37, September 2, 2005 (UTC)
I don't think this model fits reality. Many people love gambling or the lottery, even when they know that their expectation value is a loss; and there would be many more if they weren't held back by ethical/religious concerns. How can this model overlook this simple fact? Common Man 06:38, 28 October 2005 (UTC)
It's for a different draw on the same urn. -Dan 13:52, 17 January 2006 (UTC)
I don’t disagree with the general thrust of this article, but the “if and only if” argument above does not take into account the case when the two options are thought to have the same likelihood. At the risk of a double negative, we could say that Gamble A will be preferred if it is thought that a red ball is not less likely than a black ball. Even this way of expressing the rationale does not adequately take into account how a preference is made between two “equal” choices, measured by utility. Consequently, the mathematical demonstration should have greater-than-or-equal-to signs in the inequalities below. The conclusion is that B = 1/3, rather than a contradiction.
It should also be pointed out that there is consistency in the observed behaviour since the odds are known precisely for Gamble A and D, but have an unknown amount of variability for B and C. The inference that might be drawn from the observed results is that several factors are considered when making decisions, especially when there is little difference between one of the major criteria. A link to Portfolio theory might be a useful addition to this post, since Portfolio theory considers how one should consider the variance as well as the mean when making decisions.
I leave these suggestions to somebody more familiar with the topic to put into context.
In the first section, it says "it follows that you will prefer Gamble A to Gamble B if, and only if, you believe that drawing a red ball is more likely than drawing a black ball", or R > B. However, in the "Mathematical demonstration" section, ">=" is used instead of ">". Is this an inconsitency or am I wrong here? Rbarreira 13:18, 5 February 2007 (UTC)
In the mathematical demonstration, I don't think it's necessary to require U(100) preferred to U(0). The paradox holds as long as we consider that both U(100) and U(0) are constant throughout.
Running through the maths you can easily show that R[U(100)+U(0)] > B[U(100)+U(0)] in the first case and... B[U(100)-U(0)] > R[U(100)-U(0)] in the second.
Hence, it's not actually necessary to make a judgement over preferences beyond that A is preferred to B and D to C. Jamesshaw 15:30, 11 May 2007 (UTC)
Right: just noticed this is stated a little later, but still not sure why necessary to make this assumption in this section. Jamesshaw 15:32, 11 May 2007 (UTC)
Gambles A and D hedge each other. That would be the trivial solution to why they are both preferred. There could be 0 black balls, and yet there could also be 60. We would like to cover both possibilities. -- 76.217.95.43 ( talk) 02:47, 24 February 2008 (UTC)
Yes, picking both A and D is a hedge. But that doesn't explain why surveys show that most people actually do pick A and D.
Situation 1: You are allowed to pick either A or B (or neither) on one draw, and also allowed to pick either C or D (or neither) on the *same* draw.
Picking A and D is hedge -- if you pick A and D, then you get a guaranteed $100 no matter which color ball is pulled.
However, picking B and C is also a hedge -- if you pick B and C, you also get a guaranteed $100 no matter which color ball is pulled.
Situation 2: You are allowed to pick either A or B (or neither) on one draw. Then the ball is replaced, the urn shaken, and you are allowed to pick either C or D (or neither) on the *next* draw.
There's a lot more uncertainty here, but the expected return is the same as before: Picking A and D is a hedge, with an expected $100 for the 2 picks of Situation 2. Picking B and C is also a hedge, with an expected $100 for the 2 picks of Situation 2.
So why do surveys show that most people pick A and D, rather than B and C? -- 68.0.124.33 ( talk) 15:57, 20 August 2008 (UTC)
Look up at my section 'Why isn't it just comparing risk to knowledge?' and it should be obvious why people choose what they do. —Preceding unsigned comment added by 69.205.97.220 ( talk) 10:24, 2 March 2009 (UTC)
Previous discussion deleted and comments revised by OP
|
---|
==I think the math and economics are both wrong here.==
First, the math. When the article's author solves for the expected utility from a black ball (bet B), he violates Jensen's inequality. For bet B, the individual faces an unknown distribution between 0 and 2/3. With no information on the distribution, a rational individual will use the zero information probability distribution --> the uniform distribution (you can disagree with this if you want, but the result holds for any probability distribution, so there's no point). So with a discrete number of balls, the individual has a utility function over a discrete uniform distribution, or:
When the author writes utility as:
He's taking the expectation from outside the utility function and moving it to the inside (recall that the sum here is an expectation). But you can't do this. E(U(x)) is strictly less than U(E(x)) when individuals are risk averse. Now the economics -- individuals choosing B do not violate the expected utility hypothesis. They are rational risk averse dudes, preferring a tight distribution of payoffs to a fat one. More importantly, Ellsberg does not seem to argue this in his article. He argues that preferences do not reveal unique probability comparisons, as some other dude (Savage) had maintained. He's right. But I see this more as implying that the expected utility hypothesis does not imply Savage's axioms hold when comparing fixed and random payoffs. Regardless of Ellsberg's intent, the choice of B is perfectly consistent with risk-averse choices in the expected utility hypothesis. — Preceding unsigned comment added by 128.151.203.76 ( talk • contribs) 21:32, 3 July 2008 (UTC)
|
NOTE: Revised from first effort. Thanks for the comment.
If an individual faces a choice between U(x)=5 with certainty and a bet where there's a 0.5 chance that U(x) = 6 and a 0.5 chance that U(x) = 4, his expected utilities are equal. But if he's making many such bets in sequence, then he doesn't really have to pick one or the other. He can use a mixed strategy over time as though he were constructing weighted portfolios in each alternative. And it's possible that this mixed strategy will have higher utility than always picking one or the other.
Let's say a value maximizing investor has the option of allocating between two bets. Call them A and B. A delivers $100 with probability 0.5. B has two cases. Case 1 has 50% probability, and pays $100 40% of the time. Case 2 has 50% probability, and pays $100 60% of the time.
Say that the weights on A and B sum to 1. Call the weight on A = A, so the weight on B = 1-A. The investor's objective function looks like:
Which reduces to:
Taking the derivative with respect to A and setting = 0 to maximize, you get:
The optimal value of A will depend on the choice of U, but I'm pretty sure (though haven't proven) it will almost always be greater than 0.5 for risk averse investors. For illustration, suppose U(x) = sqrt(x). Then U'(x) = 1/(2*sqrt(x)). Inserting this function, multiplying both sides by 2, and taking the reciprocal of both sides gives:
Divide by 10 on each side to get:
Square to get:
Which gives A = 0.812744 > 0.5. So A gets a higher weight.
This doesn't necessarily imply that if you're forced to choose between A or B, you choose A. However, think of the experiment as being a repeated game and not a 1-off thing. Rational risk-averse individuals would choose A more often and B less often as if they were putting together a weighted portfolio.
I have to stop now. Comments/further development welcome. — Preceding unsigned comment added by 128.151.203.76 ( talk • contribs) 23:19, 4 July 2008 (UTC)
This is wrong. Look at the second equation.
You can write this as:
Taking the derivative and setting =0 will always yield A = 0.5. —Preceding unsigned comment added by 74.74.158.35 ( talk) 23:48, 17 July 2008 (UTC)
I kind of agree, but would state it this way: It makes absolutely no sense to analyze the bets under the assumption the gambler will makes a guess about the relative probabilities of yellow vs black balls, because no information is given. A proper analysis of the probability would include all possible ratios of black to yellow balls as part of the set of outcomes. From this point of view, although I'm being informal here, the probability of winning bet A is 1/3, while the probability of winning bet B is between 0 and 2/3 .. 0 if there are no black balls, and 2/3 if there are 60 black balls. The gambler prefers bet A if he prefers a bet with known probabilities, similarly the gambler prefers bet D if he or she prefers a bet with known probabilities. So it's not surprising surveys show most prefer bets A and D! Another aspect of this problem is that it's not well-defined as a problem in probability, because there is no information given about the distribution of possible ratios of yellow to black balls. It's like asking whether someone would prefer a bet of heads in which a fair coin is tossed, or a bet of heads in which a coin of UNKNOWN fairness is tossed .. in the first case, you know your odds. In the second case you have no information about the odds. (HOWEVER because you are told what your bet must be -- "heads" -- you might suspect the party proposing the bet has stacked the odds against you.) —Preceding unsigned comment added by 67.8.154.37 ( talk) 14:06, 3 March 2011 (UTC)
I think the user strategy is exactly risk aversion.
By choosing strategy (a) he is guaranteed to have 1/3 probability of winning, if he chooses (b) he might have 2/3 probability or he might have 0, depending on the game host.
Similarly choosing (d) he is guaranteed to have probability 2/3 of winning, whereas by choosing (c) he might have probability of 1, or 1/3 of winning. In both cases the probability of winning in scenarios (b) and (c) are determined by the game host, and users try to protect themselves against adversarial behavior... — Preceding unsigned comment added by 76.124.186.208 ( talk • contribs) 9 February 2009 (UTC)
I'm looking at the "paradox" and it seems obvious that this shouldn't violate any numerical reasoning. My argument: In the first scenario, A is a sured number (30) where B is the 'gamble', but in scenario two, it's the other way around, A is the gamble (30+?), but B is the sured number (60). How could someone interpret this as having a dissonance or not following a simple rule? I think if someone is more risk-taker than not, it's a B-C combo, but if they're not a risk taker, it's an A-D combo. Anyone care to help where a "paradox" comes in? — Preceding unsigned comment added by 69.205.97.220 ( talk • contribs) 10:22, 2 March 2009 (UTC)
"According to utility theory, if you have a preference for A then you should also have a preference for C." Citation needed. Utility theory predicts that goods distributions with higher utility are preferred. Because of the anti-correlation between the number of black and yellow balls, adding wins for yellow balls reduces the variance of the outcome for game B=>D and increases it for A=>C, while keeping expected value(A)=E(B) and E(C)=E(D). The results are fully explained by supposing that when expected value is equal, people prefer distributions with small variance to distributions with high variance. For instance, you predict the experimental results if you say utility(distro) = mean value / (stdv+1), which is a perfectly fine utility function. Philgoetz ( talk) 18:13, 12 February 2015 (UTC)
People are stupid and don't understand math. It's that simple. It's what keeps vegas going.
Or maybe people really do prefer a known bad deal to a deal that probably will be better, but maybe not.
I wonder if the results might vary by culture. —Preceding unsigned comment added by Paul Murray ( talk • contribs) 23:17, 11 March 2009 (UTC)
Chosing A and D is the most rational choice. With the aproach suggested in the article one could be easily fooled into poor choice by using 30 red and 60 black balls. When presented with the first problem, you would logically choose A. Clearly the approach used in the article leads to a poor choice, as chosing C in this situation leads to a lower chance of winning. It's irrational to assume that the choice you made in the first gamble was correct.-- Ancient Anomaly ( talk) 15:19, 15 December 2010 (UTC)
How is this a "paradox"? People don't like uncertainty. If someone's model fails to predict this aspect of human behavior, the model is flawed, but that doesn't make this a paradox. Please update the article to explain more clearly where the "paradox" lies, if there is one. 129.219.155.89 ( talk) 18:35, 14 January 2014 (UTC)
The guaranteed probability of getting a red ball is 30/90. The guaranteed probability of getting a non-red ball is 60/90. The probability of getting either yellow or black is undefined.
In A the probability of getting $100 is 30/90, in D it is 60/90. B and C is just trying your luck. — Preceding unsigned comment added by 94.197.127.135 ( talk) 09:01, 23 June 2013 (UTC)
"So, supposing you prefer Gamble A to Gamble B, it follows that you will also prefer Gamble C to Gamble D."
Why does this follow? Based on this false assumption, you will come to the "paradox".
To understand the "paradox" it is important to understand the assumed conclusion. — Preceding unsigned comment added by 94.197.127.135 ( talk) 09:09, 23 June 2013 (UTC)
I would propose that the illustration of the paradox be changed to the one described here: http://ocw.mit.edu/courses/economics/14-123-microeconomic-theory-iii-spring-2010/lecture-notes/MIT14_123S10_notes06.pdf — Preceding unsigned comment added by Bquast ( talk • contribs) 09:03, 3 October 2013 (UTC)
Are you told both bets at the same time? If not, do you complete the first draw before being informed of the second bet? If you are told sequentially, you might choose A out of risk aversion, and then make your choice on the second bet influence by framing effects created by the first decision (Kahneman's book discusses this).
Furthermore, if you are told the bets sequentially, there remains the hypothesis that the person giving you the wagers is behaving sharply (i.e. trying to psych you out). Since you don't know the second bet is coming, when the first bet is offered, if you are defending yourself against sharp behaviour, you'll assume there are no black balls and pick A. The sharp operator will, of course, have placed 60 black balls in the urn, expecting you to pick A. Then when the second offer is posed, you can at most win $100 total. Defending yourself against sharp thinking you'll pick D (by now realizing that you were tricked into not taken the best choice on the first bet). In general, you'll always suspect that the person making the offer is, at that step in the game, one step ahead of the chooser, and you'll make the picks viewed as most immune from this disadvantage.
Additionally, for mathematical rigour, in the case where the second offer is made after a choice is taken on the first offer, whether the second offer posed can be influenced by either the choice/selected ball from the the first offer, otherwise explanations invoking an analysis of the psychology of the person posing the offers is insufficiently posed.
Finally, for completeness is really ought to be stated whether the chooser is left wondering whether there might possibly be a third offer on the same urn.
In the case where you are told that there are two bets only, and you get to make both your selections in tandem, a rational analysis remains subject to your utility function for the different pay-offs, which can reasonably be linear, concave, or convex depending on your personal circumstance (consider The Gift of the Magi by O. Henry). This does not necessarily have anything to do with risk aversion. — MaxEnt 19:17, 6 March 2015 (UTC)
Hey, I'm currently working - together with a fellow student - on the German version of the Article for a seminar. (Our current Version can be found
here) For this I created some
graphics which could be used to vizualize the 2 Versions of the Ellsberg Paradox. If I have time at the end of the semester I could also add some of the stuff we have written for the German Wiki here.
Another thing: The section "Possible explanations" is pretty much taken from Lima Filho, Roberto IRL (July 2, 2009). "Rationality Intertwined: Classical vs Institutional View" Available at SSRN 2389751: pp. 5–6. doi:10.2139/ssrn.2389751. Thats why I added a citation at the End of the section, but basically it covers the whole section (and also I'm not sure if the Ben-Haim, Yakov (2006) citation there is appropriate, the original paper doesn't have it).
best --
Daimpi (
talk)
19:29, 28 June 2015 (UTC)
The experimentally observed choices have a rather mundane explanation — provided the utility function is concave, and the utility function is applied to expectations. It is easy to see in a simplified version of the game:
In the first case the utility of the expected gain is U($100), in the second it is either U($50), or U($150); here U is the utility function. With concave U (as usually assumed) the first choice is better.
(This is OR, but as far as I can see, this is more or less identical to formula `(1)` in this result of quick googling. -- Ilya-zz ( talk) 08:11, 3 April 2021 (UTC)
I am a university student completing a Wikipedia editing assignment (Wikipedia Course Link for reference: https://outreachdashboard.wmflabs.org/courses/UQ/ECON3430_2021_(Semester_One_2021)) and I have made the most recent edit to this Wiki Page (26/4/21: 3:56pm). I have made some subtle changes to the wording of the earlier portion of the page and extended slightly upon the 'Decisions under uncertainty aversion section' as well as added an image and Academic Paper section towards the bottom.
The markers viewing the work will review the changes I have made and grade me, I am asking everyone if they could please withhold from making further edits until the 10/5/21 to allow ample time for the tutors to see my version of the work and grade me.
Regards. — Preceding unsigned comment added by WHn457 ( talk • contribs) 06:13, 26 April 2021 (UTC)
This article was flagged for tone in December 2010 with this edit, though the OP has given no indication what the problem is. Anyway, it looks OK to me, so I have removed it. If anyone feels the tone is still an issue, they should replace the template, and say what (in their opinion) is wrong with it. I trust this is OK with everyone. Moonraker12 ( talk) 15:32, 27 July 2022 (UTC)
The final para contains two self-contradictory statements. "The work was made public in 2001, some 40 years after being published" and "The book is considered a highly-influential paper". To be made public means the same as to be published. A book can't normally be considered a paper. I don't know what the writer is trying to say here. Could someone who does know, suggest a more sensible way of formulating it? Andy Denis 12:43, 17 June 2023 (UTC) Andy Denis 12:43, 17 June 2023 (UTC) — Preceding unsigned comment added by Andy Denis ( talk • contribs)
This article is rated Start-class on Wikipedia's
content assessment scale. It is of interest to the following WikiProjects: | |||||||||||||||||||||||||||||||||||||||||
|
Thanks for your addition to the entry on the Ellsberg paradox. I must say I don't understand what you are getting at. Would you might modifying the entry to be a bit more clear? I am also worried that the explanation may mislead readers. The problem is not that individuals are choosing an option that has a worse expected utility for them, but rather that there is no utility function that can account for their behavior without including some sort of disutility for ambiguity (or someting). Anyway, thanks again for your interest! best, -- Kzollman 07:38, Jun 22, 2005 (UTC)
dear kevin, thx for your note. sorry, i haven't made myself clear - would you help me to clarify it? i would define 'mistrust to a stranger' as utility function - probably one could also interpret it as 'disutility for ambiguity', but i was trying to underscore that individuals behave reasonably when choosing to trust a relative and distrust a stranger. this seems to be the case, when we have mental reasoning that Y balls are less then 50% in first gamble and more then 50% in the second. the only constant i see, is a disbelief in possibility to win all, like in case with sister and brother. probably, i am defining 'utility' too broad, but then a question arises - should psychological terministic screens be used when one is dealing with economic terms? best twice, - unmet 22:36 jun22 2005
I seriously doubt that being 'cheated' is the most likely explanation of the paradox. In simpler studies of this type, when forced to choose between a sure thing and a bet, the expected value of the bet must usually be significantly higher before the person will choose it. This is usually explained as individuals just being averse to risk. In this case, taking the less known option gives a 50% chance of a worse option, for no greater pay, so risk aversion completely explains the participants behavior without needing to invoke the idea that the dealer is trying to cheat them. -- Kevin Saff 21:22, 14 July 2005 (UTC)
I have removed "The mistake the subjects are making..." In fact, the entire last section is speculative. Can we cite some sources on this? For instance, was this the explanation given by Ellsberg? If so, can we cite it as such? Fool 14:39, 15 August 2005 (UTC)
I think the probable solution is that many subjects misinterpret the scenario, just as I originally did! (I was in the process of writing a refutation, when I realised my mistake!) If subject assumes that they will be repeating the same gamble, using the same bag of balls each time (with balls being replaced before next draw), then, although A and B have same initial expected payout, A is less risky that B; similarly C and D have equal expectations, but D is less risky than C (the less risky gambles having the a fixed expection, whatever mixture of balls are in the bag - within defined constraints). In this case, rational people having any positive risk aversion prefer the lower risks; those with negative risk aversion prefer the risky ones.
Considering a single gamble is a bit artificial and "everyone" knows that probabilities only apply to averages of many runs of the "same" event. Even so, why should one assume that the "same" event implies using the same bag of balls each time (with chosen ball returned to bag), rather than a new bag of balls (or totally reconstituted contents) each time (albeit satisfying the specified ball distributions each time)? The analysis of the latter is the same as the one presented in the live page, but the former is what I initially assumed, perhaps because, subconciously, I knew it would be simpler to do in reality.
Most people probably do not choose their risk aversion rationally, especially in a hypothetical problem, but there is a rational attitude to risk, depending on the utility of the payouts to that person. With the above misinterpretted scenario, if, say, a below-average payout had no utility, then the low risk gambles are preferable; but if, say, only above-average payouts have any utility, then high risk gambles are preferable. (Strictly speaking, it may be rational to choose lower expections and various risks (variability) of monetary payout; but in terms of utility function, it is always rational to maximise utility, and risk (variability) of utility is irrelevent (personal preference being subsumed in the personal utility function).
John Newbury 22:36, 22 August 2005 (UTC)
The 'psychological explanation' that has been cut from this page resembles a simple (perhaps too simple) version of the drama-theoretic explanation of ambiguity aversion. I've added a link to this explanation. (I've also corrected the number of red balls from 20 back to 30. 30 is what it should be, without doubt.)
The drama-theoretic explanation invokes the 'personalization hypothesis'. This says that humans interacting with an entity in a signficant way will consciously or subconsciously personalize that entity -- ie, think of it as another party in an ongoing dramatic interaction. Luck (as in 'Luck be a lady') or death (as in 'Death has come to take me') are almost inevitably personalized in this way. Now what an experimenter presents as an ambiguity will, according to this hypothesis, be experienced as something that's chosen by the (personalized) experimental set-up. An assurance that the set-up will make an impartial, disinterested choice may be distrusted. The resulting 'trust dilemma' is reduced or eliminated by choosing an alternative that requires less trust to be placed in the experimental set-up. Nhoward 19:17, 23 August 2005 (UTC)
Two things. First, chosing the pairs A and D, or B and C respectively insures that you get $100 with no risk. I find that attractive and not at all irrational. Second, this is the Nash equilibrium (ie, the minimum amount you are sure to gain) for the game. -- KarlHallowell 19:46, 23 August 2005 (UTC)
I have removed the following text from the article:
The reference mentioned there is a posting on a web form by an unsigned author. It does not meet the requirements of notability. In addition, the explanation from the web forum is almost exactly the same as the explanation that is discussed above (and the consensus decision was that it should be removed). --best, kevin ··· Kzollman | Talk··· 16:37, September 2, 2005 (UTC)
I don't think this model fits reality. Many people love gambling or the lottery, even when they know that their expectation value is a loss; and there would be many more if they weren't held back by ethical/religious concerns. How can this model overlook this simple fact? Common Man 06:38, 28 October 2005 (UTC)
It's for a different draw on the same urn. -Dan 13:52, 17 January 2006 (UTC)
I don’t disagree with the general thrust of this article, but the “if and only if” argument above does not take into account the case when the two options are thought to have the same likelihood. At the risk of a double negative, we could say that Gamble A will be preferred if it is thought that a red ball is not less likely than a black ball. Even this way of expressing the rationale does not adequately take into account how a preference is made between two “equal” choices, measured by utility. Consequently, the mathematical demonstration should have greater-than-or-equal-to signs in the inequalities below. The conclusion is that B = 1/3, rather than a contradiction.
It should also be pointed out that there is consistency in the observed behaviour since the odds are known precisely for Gamble A and D, but have an unknown amount of variability for B and C. The inference that might be drawn from the observed results is that several factors are considered when making decisions, especially when there is little difference between one of the major criteria. A link to Portfolio theory might be a useful addition to this post, since Portfolio theory considers how one should consider the variance as well as the mean when making decisions.
I leave these suggestions to somebody more familiar with the topic to put into context.
In the first section, it says "it follows that you will prefer Gamble A to Gamble B if, and only if, you believe that drawing a red ball is more likely than drawing a black ball", or R > B. However, in the "Mathematical demonstration" section, ">=" is used instead of ">". Is this an inconsitency or am I wrong here? Rbarreira 13:18, 5 February 2007 (UTC)
In the mathematical demonstration, I don't think it's necessary to require U(100) preferred to U(0). The paradox holds as long as we consider that both U(100) and U(0) are constant throughout.
Running through the maths you can easily show that R[U(100)+U(0)] > B[U(100)+U(0)] in the first case and... B[U(100)-U(0)] > R[U(100)-U(0)] in the second.
Hence, it's not actually necessary to make a judgement over preferences beyond that A is preferred to B and D to C. Jamesshaw 15:30, 11 May 2007 (UTC)
Right: just noticed this is stated a little later, but still not sure why necessary to make this assumption in this section. Jamesshaw 15:32, 11 May 2007 (UTC)
Gambles A and D hedge each other. That would be the trivial solution to why they are both preferred. There could be 0 black balls, and yet there could also be 60. We would like to cover both possibilities. -- 76.217.95.43 ( talk) 02:47, 24 February 2008 (UTC)
Yes, picking both A and D is a hedge. But that doesn't explain why surveys show that most people actually do pick A and D.
Situation 1: You are allowed to pick either A or B (or neither) on one draw, and also allowed to pick either C or D (or neither) on the *same* draw.
Picking A and D is hedge -- if you pick A and D, then you get a guaranteed $100 no matter which color ball is pulled.
However, picking B and C is also a hedge -- if you pick B and C, you also get a guaranteed $100 no matter which color ball is pulled.
Situation 2: You are allowed to pick either A or B (or neither) on one draw. Then the ball is replaced, the urn shaken, and you are allowed to pick either C or D (or neither) on the *next* draw.
There's a lot more uncertainty here, but the expected return is the same as before: Picking A and D is a hedge, with an expected $100 for the 2 picks of Situation 2. Picking B and C is also a hedge, with an expected $100 for the 2 picks of Situation 2.
So why do surveys show that most people pick A and D, rather than B and C? -- 68.0.124.33 ( talk) 15:57, 20 August 2008 (UTC)
Look up at my section 'Why isn't it just comparing risk to knowledge?' and it should be obvious why people choose what they do. —Preceding unsigned comment added by 69.205.97.220 ( talk) 10:24, 2 March 2009 (UTC)
Previous discussion deleted and comments revised by OP
|
---|
==I think the math and economics are both wrong here.==
First, the math. When the article's author solves for the expected utility from a black ball (bet B), he violates Jensen's inequality. For bet B, the individual faces an unknown distribution between 0 and 2/3. With no information on the distribution, a rational individual will use the zero information probability distribution --> the uniform distribution (you can disagree with this if you want, but the result holds for any probability distribution, so there's no point). So with a discrete number of balls, the individual has a utility function over a discrete uniform distribution, or:
When the author writes utility as:
He's taking the expectation from outside the utility function and moving it to the inside (recall that the sum here is an expectation). But you can't do this. E(U(x)) is strictly less than U(E(x)) when individuals are risk averse. Now the economics -- individuals choosing B do not violate the expected utility hypothesis. They are rational risk averse dudes, preferring a tight distribution of payoffs to a fat one. More importantly, Ellsberg does not seem to argue this in his article. He argues that preferences do not reveal unique probability comparisons, as some other dude (Savage) had maintained. He's right. But I see this more as implying that the expected utility hypothesis does not imply Savage's axioms hold when comparing fixed and random payoffs. Regardless of Ellsberg's intent, the choice of B is perfectly consistent with risk-averse choices in the expected utility hypothesis. — Preceding unsigned comment added by 128.151.203.76 ( talk • contribs) 21:32, 3 July 2008 (UTC)
|
NOTE: Revised from first effort. Thanks for the comment.
If an individual faces a choice between U(x)=5 with certainty and a bet where there's a 0.5 chance that U(x) = 6 and a 0.5 chance that U(x) = 4, his expected utilities are equal. But if he's making many such bets in sequence, then he doesn't really have to pick one or the other. He can use a mixed strategy over time as though he were constructing weighted portfolios in each alternative. And it's possible that this mixed strategy will have higher utility than always picking one or the other.
Let's say a value maximizing investor has the option of allocating between two bets. Call them A and B. A delivers $100 with probability 0.5. B has two cases. Case 1 has 50% probability, and pays $100 40% of the time. Case 2 has 50% probability, and pays $100 60% of the time.
Say that the weights on A and B sum to 1. Call the weight on A = A, so the weight on B = 1-A. The investor's objective function looks like:
Which reduces to:
Taking the derivative with respect to A and setting = 0 to maximize, you get:
The optimal value of A will depend on the choice of U, but I'm pretty sure (though haven't proven) it will almost always be greater than 0.5 for risk averse investors. For illustration, suppose U(x) = sqrt(x). Then U'(x) = 1/(2*sqrt(x)). Inserting this function, multiplying both sides by 2, and taking the reciprocal of both sides gives:
Divide by 10 on each side to get:
Square to get:
Which gives A = 0.812744 > 0.5. So A gets a higher weight.
This doesn't necessarily imply that if you're forced to choose between A or B, you choose A. However, think of the experiment as being a repeated game and not a 1-off thing. Rational risk-averse individuals would choose A more often and B less often as if they were putting together a weighted portfolio.
I have to stop now. Comments/further development welcome. — Preceding unsigned comment added by 128.151.203.76 ( talk • contribs) 23:19, 4 July 2008 (UTC)
This is wrong. Look at the second equation.
You can write this as:
Taking the derivative and setting =0 will always yield A = 0.5. —Preceding unsigned comment added by 74.74.158.35 ( talk) 23:48, 17 July 2008 (UTC)
I kind of agree, but would state it this way: It makes absolutely no sense to analyze the bets under the assumption the gambler will makes a guess about the relative probabilities of yellow vs black balls, because no information is given. A proper analysis of the probability would include all possible ratios of black to yellow balls as part of the set of outcomes. From this point of view, although I'm being informal here, the probability of winning bet A is 1/3, while the probability of winning bet B is between 0 and 2/3 .. 0 if there are no black balls, and 2/3 if there are 60 black balls. The gambler prefers bet A if he prefers a bet with known probabilities, similarly the gambler prefers bet D if he or she prefers a bet with known probabilities. So it's not surprising surveys show most prefer bets A and D! Another aspect of this problem is that it's not well-defined as a problem in probability, because there is no information given about the distribution of possible ratios of yellow to black balls. It's like asking whether someone would prefer a bet of heads in which a fair coin is tossed, or a bet of heads in which a coin of UNKNOWN fairness is tossed .. in the first case, you know your odds. In the second case you have no information about the odds. (HOWEVER because you are told what your bet must be -- "heads" -- you might suspect the party proposing the bet has stacked the odds against you.) —Preceding unsigned comment added by 67.8.154.37 ( talk) 14:06, 3 March 2011 (UTC)
I think the user strategy is exactly risk aversion.
By choosing strategy (a) he is guaranteed to have 1/3 probability of winning, if he chooses (b) he might have 2/3 probability or he might have 0, depending on the game host.
Similarly choosing (d) he is guaranteed to have probability 2/3 of winning, whereas by choosing (c) he might have probability of 1, or 1/3 of winning. In both cases the probability of winning in scenarios (b) and (c) are determined by the game host, and users try to protect themselves against adversarial behavior... — Preceding unsigned comment added by 76.124.186.208 ( talk • contribs) 9 February 2009 (UTC)
I'm looking at the "paradox" and it seems obvious that this shouldn't violate any numerical reasoning. My argument: In the first scenario, A is a sured number (30) where B is the 'gamble', but in scenario two, it's the other way around, A is the gamble (30+?), but B is the sured number (60). How could someone interpret this as having a dissonance or not following a simple rule? I think if someone is more risk-taker than not, it's a B-C combo, but if they're not a risk taker, it's an A-D combo. Anyone care to help where a "paradox" comes in? — Preceding unsigned comment added by 69.205.97.220 ( talk • contribs) 10:22, 2 March 2009 (UTC)
"According to utility theory, if you have a preference for A then you should also have a preference for C." Citation needed. Utility theory predicts that goods distributions with higher utility are preferred. Because of the anti-correlation between the number of black and yellow balls, adding wins for yellow balls reduces the variance of the outcome for game B=>D and increases it for A=>C, while keeping expected value(A)=E(B) and E(C)=E(D). The results are fully explained by supposing that when expected value is equal, people prefer distributions with small variance to distributions with high variance. For instance, you predict the experimental results if you say utility(distro) = mean value / (stdv+1), which is a perfectly fine utility function. Philgoetz ( talk) 18:13, 12 February 2015 (UTC)
People are stupid and don't understand math. It's that simple. It's what keeps vegas going.
Or maybe people really do prefer a known bad deal to a deal that probably will be better, but maybe not.
I wonder if the results might vary by culture. —Preceding unsigned comment added by Paul Murray ( talk • contribs) 23:17, 11 March 2009 (UTC)
Chosing A and D is the most rational choice. With the aproach suggested in the article one could be easily fooled into poor choice by using 30 red and 60 black balls. When presented with the first problem, you would logically choose A. Clearly the approach used in the article leads to a poor choice, as chosing C in this situation leads to a lower chance of winning. It's irrational to assume that the choice you made in the first gamble was correct.-- Ancient Anomaly ( talk) 15:19, 15 December 2010 (UTC)
How is this a "paradox"? People don't like uncertainty. If someone's model fails to predict this aspect of human behavior, the model is flawed, but that doesn't make this a paradox. Please update the article to explain more clearly where the "paradox" lies, if there is one. 129.219.155.89 ( talk) 18:35, 14 January 2014 (UTC)
The guaranteed probability of getting a red ball is 30/90. The guaranteed probability of getting a non-red ball is 60/90. The probability of getting either yellow or black is undefined.
In A the probability of getting $100 is 30/90, in D it is 60/90. B and C is just trying your luck. — Preceding unsigned comment added by 94.197.127.135 ( talk) 09:01, 23 June 2013 (UTC)
"So, supposing you prefer Gamble A to Gamble B, it follows that you will also prefer Gamble C to Gamble D."
Why does this follow? Based on this false assumption, you will come to the "paradox".
To understand the "paradox" it is important to understand the assumed conclusion. — Preceding unsigned comment added by 94.197.127.135 ( talk) 09:09, 23 June 2013 (UTC)
I would propose that the illustration of the paradox be changed to the one described here: http://ocw.mit.edu/courses/economics/14-123-microeconomic-theory-iii-spring-2010/lecture-notes/MIT14_123S10_notes06.pdf — Preceding unsigned comment added by Bquast ( talk • contribs) 09:03, 3 October 2013 (UTC)
Are you told both bets at the same time? If not, do you complete the first draw before being informed of the second bet? If you are told sequentially, you might choose A out of risk aversion, and then make your choice on the second bet influence by framing effects created by the first decision (Kahneman's book discusses this).
Furthermore, if you are told the bets sequentially, there remains the hypothesis that the person giving you the wagers is behaving sharply (i.e. trying to psych you out). Since you don't know the second bet is coming, when the first bet is offered, if you are defending yourself against sharp behaviour, you'll assume there are no black balls and pick A. The sharp operator will, of course, have placed 60 black balls in the urn, expecting you to pick A. Then when the second offer is posed, you can at most win $100 total. Defending yourself against sharp thinking you'll pick D (by now realizing that you were tricked into not taken the best choice on the first bet). In general, you'll always suspect that the person making the offer is, at that step in the game, one step ahead of the chooser, and you'll make the picks viewed as most immune from this disadvantage.
Additionally, for mathematical rigour, in the case where the second offer is made after a choice is taken on the first offer, whether the second offer posed can be influenced by either the choice/selected ball from the the first offer, otherwise explanations invoking an analysis of the psychology of the person posing the offers is insufficiently posed.
Finally, for completeness is really ought to be stated whether the chooser is left wondering whether there might possibly be a third offer on the same urn.
In the case where you are told that there are two bets only, and you get to make both your selections in tandem, a rational analysis remains subject to your utility function for the different pay-offs, which can reasonably be linear, concave, or convex depending on your personal circumstance (consider The Gift of the Magi by O. Henry). This does not necessarily have anything to do with risk aversion. — MaxEnt 19:17, 6 March 2015 (UTC)
Hey, I'm currently working - together with a fellow student - on the German version of the Article for a seminar. (Our current Version can be found
here) For this I created some
graphics which could be used to vizualize the 2 Versions of the Ellsberg Paradox. If I have time at the end of the semester I could also add some of the stuff we have written for the German Wiki here.
Another thing: The section "Possible explanations" is pretty much taken from Lima Filho, Roberto IRL (July 2, 2009). "Rationality Intertwined: Classical vs Institutional View" Available at SSRN 2389751: pp. 5–6. doi:10.2139/ssrn.2389751. Thats why I added a citation at the End of the section, but basically it covers the whole section (and also I'm not sure if the Ben-Haim, Yakov (2006) citation there is appropriate, the original paper doesn't have it).
best --
Daimpi (
talk)
19:29, 28 June 2015 (UTC)
The experimentally observed choices have a rather mundane explanation — provided the utility function is concave, and the utility function is applied to expectations. It is easy to see in a simplified version of the game:
In the first case the utility of the expected gain is U($100), in the second it is either U($50), or U($150); here U is the utility function. With concave U (as usually assumed) the first choice is better.
(This is OR, but as far as I can see, this is more or less identical to formula `(1)` in this result of quick googling. -- Ilya-zz ( talk) 08:11, 3 April 2021 (UTC)
I am a university student completing a Wikipedia editing assignment (Wikipedia Course Link for reference: https://outreachdashboard.wmflabs.org/courses/UQ/ECON3430_2021_(Semester_One_2021)) and I have made the most recent edit to this Wiki Page (26/4/21: 3:56pm). I have made some subtle changes to the wording of the earlier portion of the page and extended slightly upon the 'Decisions under uncertainty aversion section' as well as added an image and Academic Paper section towards the bottom.
The markers viewing the work will review the changes I have made and grade me, I am asking everyone if they could please withhold from making further edits until the 10/5/21 to allow ample time for the tutors to see my version of the work and grade me.
Regards. — Preceding unsigned comment added by WHn457 ( talk • contribs) 06:13, 26 April 2021 (UTC)
This article was flagged for tone in December 2010 with this edit, though the OP has given no indication what the problem is. Anyway, it looks OK to me, so I have removed it. If anyone feels the tone is still an issue, they should replace the template, and say what (in their opinion) is wrong with it. I trust this is OK with everyone. Moonraker12 ( talk) 15:32, 27 July 2022 (UTC)
The final para contains two self-contradictory statements. "The work was made public in 2001, some 40 years after being published" and "The book is considered a highly-influential paper". To be made public means the same as to be published. A book can't normally be considered a paper. I don't know what the writer is trying to say here. Could someone who does know, suggest a more sensible way of formulating it? Andy Denis 12:43, 17 June 2023 (UTC) Andy Denis 12:43, 17 June 2023 (UTC) — Preceding unsigned comment added by Andy Denis ( talk • contribs)