Talk:Ludic fallacy
This article was nominated for deletion on 24 July 2014 (UTC). The result of the discussion was keep.
This article is rated Start-class on Wikipedia's content assessment scale.
Early discussion
Notability.
References for the article etc. will be here in a few days.
- I added references. More are on their way. IdeasLover 07:16, 13 January 2007 (UTC)
"Uncertainty of the nerd"? Where's that coming from? Is that actually Taleb's words as well? --Khazar 13:48, 13 May 2007 (UTC)
- Taleb's wording indeed. YechezkelZilber 02:56, 14 May 2007 (UTC)
What the heck is GIF? Frankly I don't believe this belongs here. Does this entry meet the criteria for a fallacy? It seems too vague as it is currently worded. It is self-evident that unknown factors cannot influence utility calculations. This has nothing to do with statistical theory per se.
- GIF is an acronym for "Great Intellectual Fraud"; see the text.
- The issue has nothing to do with the theoretical side of probability theory. It is about the practical - too common - use of statistical theory and its application where it is irrelevant (or very inexact and practically off the mark). Your claim about the unknown and utility calculation is exactly about this point - one cannot do utilities etc. based on the fantasy that one knows everything. Utility stuff is good when used with caution or for theoretical purposes. But in practice one needs to know actual utilities, not twice-differentiable Pratt-Arrow curves! (I suspect I am not clear enough, so I will clarify if you think I did not make my point) YechezkelZilber 22:54, 3 June 2007 (UTC)
The definition here needs to be more clearly explained. Although Taleb has a definition on his Black Swan Glossary and in his book, you need to read the whole chapter (or at least the first 4 pages of it) to simply get the definition.
In Taleb's words, ludic fallacy (or "uncertainty of the nerd") is the "manifestation of the Platonic fallacy in the study of uncertainty; basing studies of chance on the narrow world of games and dice. A platonic randomness has an additional layer of uncertainty concerning the rules of the game of real life."
The second paragraph has quotes from NNT's book and glossary, but that doesn't help explain the idea. "The uncertainty of the nerd" is meaningless unless you give the narrative of Dr. John and Fat Tony. The "ludic fallacy" is simply treating our current culture's idea of games and gambling as equivalent to randomness as seen in real life. We are guilty of the ludic fallacy when we equate the chances of an event that has occurred to us with those of flipping a coin. This is simply because we know the odds of getting heads or tails (a 50 percent chance), whereas we can't begin to compute the odds of missing the train (all we REALLY KNOW is that the odds change depending on other events, i.e. strikes, earthquakes, etc. This leads to Popper, falsification, and the idea that it's easier to say what I don't know than what I do). When we compare the train event to flipping a coin, what we are saying is that there was only a 50% chance of missing the train. We fall for this fallacy because "we like known schemas and well organized knowledge - to the point of blindness to reality."
I've re-factored the example section in the vein of the train example, as an example should be more straightforward for the viewer who hasn't read the book.--Herda050 06:49, 6 September 2007 (UTC)
- I am not sure the example is appropriate. Anyone saying "it was a coin flip" for the train story does not mean the technical 50/50 gamble. An example where people take the technical stuff seriously would be more appropriate (maybe the one I gave originally). One may look in the book and take the example from there (should be chapter 9 or so, "The Ludic Fallacy"). YechezkelZilber 02:04, 7 September 2007 (UTC)
- YechezkelZilber the autodidact? Sorry to blatantly rewrite your example, as there was nothing incorrect about it. I thought it might be too dense for someone who hasn't read the book though. I know the example I provide is oversimplified (which in itself is a problem), but the article should be geared toward people who haven't read the book. Don't you think he was indicting our too human instinct to link metaphorically the probabilities we see in gambling with everyday instances that occur in life ("we use the example of games, which probability theory was successful at tracking, and claim this is a general case")? I'm not sure who you're referring to by "people who take the technical stuff seriously". If you mean people who were trained in statistics, I didn't get that he was indicting just those people. He says "we automatically, spontaneously associate chance with these platonified games." I read that he was indicting the human race ("experts" included) as falling for the ludic fallacy.
- I have the US printing of the book, and he actually doesn't give a clear example in that chapter. He uses the rhetorical style he employs throughout the book, narrative, then tells the story of his attendance at a brainstorming session at the Las Vegas hotel. The story he tells illustrates the Ludic Fallacy, but is too long for the article and copyrighted.--Herda050 06:23, 7 September 2007 (UTC)
- I looked at the other fallacy pages and re-added your example as number 2. Guess I "focused" too much to contemplate that having two examples would be fine.--Herda050 06:44, 7 September 2007 (UTC)
- It is me. No need to apologize, I enjoy being equal. Thanks for re-using my text, hope it is actually useful. I admit the heaviness of the example. The UK/US versions of the book are the same. Nice editing. YechezkelZilber 15:19, 7 September 2007 (UTC)
Sorry to be posting this remark anonymously, but I don't yet have a Wikipedia account. The entry makes, I think, a strange and wrong argument. It remarks,
"The young job seeker here forgets that real life is more complex than the theory of statistics. Even with a low probability of success, a really good job may be worth taking the interview. Will he enjoy the process of the interview? What strategic value might he win or lose in the process?"
But... there's nothing in those considerations that is outside the realm of basic expected utility theory and the model of decision analysis that this example is given as a way to refute. In particular, the observation that the low chance of success ought not deter a job seeker if the reward is great enough is the bread and butter of expected utility calculations (e.g. drawing for an inside straight in poker is a long shot and usually a bad decision, but it might be worthwhile if the pot is large enough in comparison to the bet required to stay in the hand--the expected utility is positive, even if the chance of success is low). - John (a prof in the social sciences) —Preceding unsigned comment added by 68.49.250.185 (talk • contribs) 16:24, 8 November 2007
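The pot-odds arithmetic behind the inside-straight remark above can be sketched in a few lines. (An illustrative aside, not part of the original comment: the `call_ev` helper is hypothetical, and the 4-outs/46-unseen counts assume a hold'em-style draw with one card to come.)

```python
# Expected value of calling a bet while drawing to an inside straight:
# 4 outs among 46 unseen cards, one card to come.
from fractions import Fraction

def call_ev(pot, bet, outs=4, unseen=46):
    """Expected value of calling `bet` to win `pot` with the given outs."""
    p = Fraction(outs, unseen)          # chance the draw hits
    return p * pot - (1 - p) * bet      # win the pot, or lose the bet

print(call_ev(pot=100, bet=10))  # negative: long shot, pot too small
print(call_ev(pot=500, bet=10))  # positive: same long shot, big enough pot
```

The sign flip between the two calls is exactly the commenter's point: the low probability is unchanged, but a large enough pot makes the call rational.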
- Thanks for the comment. The idea was about complexity. Even when starting from the assumptions of expected utility, it is not simple to nail down the details. Second-order effects (the joy of the interview, non-linear effects of the process, etc.) make the picture even more fuzzy. There is much less knowledge about the parameters than it feels at first glance. Points should be clarified in the article, though. YechezkelZilber 15:00, 9 November 2007 (UTC)
- John, to add to YechezkelZilber's point, the ludic fallacy highlights our human tendency to simplify the complexities of life, specifically in the form of using games and gambling (dice, poker, etc.) as the metaphor. However, unlike in such games, the probabilities in life are dynamic and the rules constantly change. For poker to be an accurate metaphor, you would need to randomly alter the rules (adding cards to the deck, changing the order of hands, etc.). Although all those considerations (listed in the example) may lie within the realm of basic utility theory, real life provides results that are beyond our ability to consider and therefore can't be measured for their utility until after they occur. Herda050 00:50, 13 November 2007 (UTC)
The formulation of this "fallacy" is erroneous -- it's a pragmatic fallacy, not a logical fallacy. I'm not sure if the examples are poorly interpreted or recounted, but neither is a logical problem. In the first example, we are told to _assume_ that the coin is fair (e.g., 50/50 heads/tails) for the purposes of the thought experiment. To then, at the end of the experiment, call into question that premise is contrary to the very point of the thought experiment. If the premise is true, then the good doctor is right; if it's false, then we really have no way of evaluating the truth or falsity of either the doctor's or fat tony's statements. Regardless, I don't think anyone is really fooled by that sort of fallacy -- if a real-life coin DID come up heads 99 times in a row, I would be suspicious of its fairness, but is there any idiot who wouldn't?
In the second example, it's another bit of argumentative trickery to try and persuade us that we can't fit the model to life. Yes, there are unknowns and complexities in real life. That has nothing to do with a model and its applicability to real life. A good portion of modeling theory is dedicated to just this problem: estimating the scope and effect of uncertain circumstances that may affect the model. Simply because his model might suck is no reason to doubt the applicability of a model in general. Similarly, we have well-understood models for things like, say, paths of projectiles in normal earth conditions. But oh no! We neglect to take into account complexities of the gravitational effects of the moon, or a nearby comet! Regardless, any competent physicist (modeler) could give you a reasonably accurate idea of where your projectile will land, given some basic initial conditions and measurements. And in game theory, it's the same situation -- there are always going to be factors that affect probabilities chaotically -- the bit of grease on the Ace of Spades, or a slight ridge in your coin-flipping thumbnail. We just happen to be reasonably comfortable with making guesses based on inductive models.
And, of course, it is induction that is at the heart of this argument (as the author undoubtedly recognizes, with a title like "Black Swan"). If the Ludic fallacy is a false belief that games or models apply to real life, and our models are merely inductive representations of the real thing, then the exact same critique can be applied to any inductive (e.g., synthetic) knowledge. Unfortunately for this "fallacy", David Hume discovered that some time ago.
76.19.65.187 (talk) 22:50, 8 April 2008 (UTC)
- I haven't read the book yet, nor do I know very much about philosophy, so I could be completely off here. However, the first example seems to completely ignore Bayesian statistics. We are told to assume that the coin is fair; that is, we can assume we have a prior distribution of 0.5 heads and 0.5 tails. Laplace's rule of succession offers a mathematically rigorous way to update that prior based on the outcomes of the 99 flips; it turns out that the posterior probability of the hundredth flip coming up heads will be 100/101. The reason Dr. John doesn't agree with Fat Tony is not because he is committing the ludic fallacy - it's because he should be using a better model. Stochastic (talk) 03:53, 1 June 2008 (UTC)
- Whoops, I mean we can assume a uniform prior for the probability of getting heads. Stochastic (talk) 16:53, 1 June 2008 (UTC)
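The rule-of-succession arithmetic in the two comments above is easy to check. A minimal sketch (an illustration, not part of the original discussion):

```python
# Laplace's rule of succession: with a uniform (Beta(1, 1)) prior on the
# probability of heads, observing k heads in n flips gives a Beta(k+1, n-k+1)
# posterior, whose mean -- the predictive probability of heads on the next
# flip -- is (k + 1) / (n + 2).
from fractions import Fraction

def rule_of_succession(heads, flips):
    """Posterior predictive probability of heads on the next flip."""
    return Fraction(heads + 1, flips + 2)

# Dr. John's scenario: 99 heads in 99 flips.
print(rule_of_succession(99, 99))  # 100/101, as stated in the comment
```

Note the limit behavior: with no data the prediction is 1/2, and it moves toward certainty of heads as the run of heads lengthens, which is the "better model" Stochastic is pointing to.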
- I don't have the book to check, but I'm removing the part about assuming the coin is fair for the purpose of the thought experiment. Otherwise the whole thing doesn't make sense. Eric Kvaalen (talk) 07:06, 25 October 2008 (UTC)
The phrasing of the Fat Tony/Dr John example is not as clear as it is in the book. It still looks here as if Fat Tony is making a mathematical judgement (99-1) or figuring out the problem logically using a better model. That misses the point. What Taleb is trying to say is that Dr John, like so many clever people (including me, alas), has been trained throughout his life in mathematical/engineering disciplines and prefers to reduce problems to the comfortable simplicities of logic. Hence when Dr John is told the premise of a fair coin toss, he accepts it, as he would in an exam hall, and produces the pat answer 50-50. Gold star for him. Fat Tony is a hustler. When he sees a coin landing heads 99 times, he knows the game is crooked - he doesn't accept the premise - someone's lying, or has been lied to. Dr John is the Nerd - he has a closed-minded, academic mind-set. Another point that Taleb makes is that in human life, occurrences can happen (especially in the social or financial worlds) that cannot be predicted, nor their effect calculated, and that if we try to make future predictions based on statistical models, we can be wildly wrong. Think, for example, of the effects of the invention of the internet, or penicillin, or the Russian loan default. The projectile example given above is not appropriate in this human, social world - think instead of calculating the path of a projectile when a gravity well the size of Jupiter may (or may not) suddenly appear in front of you. --Cavort (talk) 15:35, 25 August 2008 (UTC)
Update
Hi, I've had a go at improving this page, taking some information from the book to help.
I realise that using the Red/Black question is not ideal, as the idea of the ludic fallacy is that casino logic doesn't apply to real life, but I thought it would help explain the idea of the coin being loaded. —Preceding unsigned comment added by Horrisgoesskiing (talk • contribs) 11:50, 14 March 2008 (UTC)
The text currently contains the following:
"By utilizing your colleague's analogy going forward, you don't understand that there could be a far greater or far lesser chance of making the train, but you think you know what your chances of making the train are, and in reality you now have a far greater or lesser chance of getting home on time. The future unknown risks involve the consequences of consistently getting home later than expected."
There's no referent present for this "colleague's analogy", etc. I don't know the "train story" (which this seems to relate to) so I can't fix things.
Apologies for the anon post. —Preceding unsigned comment added by 75.28.162.152 (talk) 03:25, 3 April 2008 (UTC)
I too became confused about "colleague's analogy" and trains. I don't know the "train story" so I can't fix this orphaned reference. —Preceding unsigned comment added by 67.102.38.38 (talk) 16:44, 23 May 2008 (UTC)
Hi. I agree with the other commenters. The 2nd example is particularly erroneous. "Even with a low probability of success, a really good job may be worth the effort of taking the interview." Supposedly, according to the example, the interviewee understands statistics and utility. But obviously he doesn't understand utility if he doesn't understand that a really good job is worth a low-probability effort!!! That's the entire point of utility!!! —Preceding unsigned comment added by Hatch113 (talk • contribs) 02:35, 3 March 2009 (UTC)
I have removed the link to the Peter Taylor slideshow because it is broken--it directs to a page for the Oxford center of which he is a member, but searching their site, I find no PowerPoint on the ludic fallacy. I have substituted instead a blog by Taylor on the ludic fallacy. —Preceding unsigned comment added by 74.173.40.245 (talk) 22:13, 20 August 2009 (UTC)
Isn't this the same as the Gambler's Fallacy? 86.45.151.91 (talk) 23:12, 21 September 2009 (UTC)
Bad examples (1 and 2)
For example, in two: "The young job seeker here forgets that real life is more complex than the theory of statistics.". The theory of statistics has nothing to say about the complexity of life. It would be more accurate to say "The young job seeker here forgets that real life has more variables than the small set he has chosen to estimate." Statistics is perfectly able to produce accurate results if given an accurate model. —Preceding unsigned comment added by 72.201.248.165 (talk) 22:03, 14 January 2010 (UTC)
- The problem is getting an accurate model. That is the whole ludic fallacy: people erroneously assume simplified models. Yechezkel Zilber (talk) 23:03, 14 January 2010 (UTC)
- Yes, well, as there was no move to remove the blame from statistics itself, I have made the suggested modification. 72.201.248.165 (talk) 19:42, 16 January 2010 (UTC)
example 3 confusing
How were the extreme moves of Black Thursday different, for the purposes of the model, from the market breakdown after the tsunami, except that the tsunami caused even more extreme moves? It is a model of prices! Did the markets close?
Taleb often criticizes the CAPM for being unable to deal with extreme events. Such events are possible in the model, but should be much more rare than is actually observed. This is due to the assumption that the price movements are normally distributed. Mandelbrot thinks he can patch things up (see his "Misbehavior of Markets"), in which case it would not be an instance of the ludic fallacy but just a bad model. — Preceding unsigned comment added by 98.150.235.107 (talk) 08:51, 5 March 2012 (UTC)
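The "possible in the model, but much more rare than observed" point above can be made concrete with the normal tail probability. (A sketch, not part of the original comment: the `normal_tail` helper is hypothetical, and the 10-sigma figure is just an illustrative move size for an extreme market day.)

```python
# Tail probability P(X > z) of a standard normal, computed with erfc for
# numerical stability at large z (erf itself rounds to 1.0 there in
# double precision, which would make the tail collapse to exactly 0).
from math import erfc, sqrt

def normal_tail(z):
    """P(X > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2.0))

# A 10-sigma daily move under the normal assumption:
print(normal_tail(10))  # ~7.6e-24
```

At one draw per trading day, an event with probability on the order of 1e-23 should essentially never be seen in recorded history, yet markets have produced moves of that magnitude more than once, which is the mismatch the comment attributes to the normality assumption.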
"Fat Tony" example is misrepresented
The current version of the article words the thought experiment along the lines of, "assume a man comes up to you and tells you the coin is fair, and then it flips heads 99 times". In the actual book, it is worded as "assume the coin IS fair, and it comes up heads 99 times". These are very different thought experiments. The former illustrates correct Bayesian reasoning, whereas the latter is nonsensical, as the fairness of the coin was given as a premise. In that case, if you are going to doubt that the coin is fair, why not instead doubt that the coin came up heads 99 times? They are both part of the premise. There should at least be a note in the article about this discrepancy (sorry, but this particular passage in the book really bugged me, and I do not think Taleb's lack of understanding should be "corrected" in this article). ReverendDave (talk) 11:05, 5 February 2010 (UTC)
- This is exactly the point. The very idea of automatically and categorically taking the premise at face value is not smart. Taleb holds that Fat Tony would have said "the coin is not fair" regardless of what the premise is, because Fat Tony would not accept that you decide for him how to define the question. Fat Tony does not think in mere technical ways. Yechezkel Zilber (talk) 21:22, 5 February 2010 (UTC)
- But again, in the book, the 99 heads is just as much a part of the premise as the fairness of the coin is, as is the very existence of the coin. So Fat Tony might as well have said that he doesn't believe the coin came up heads 99 times. Or he might have just said the coin does not exist at all. The example given in the wiki article is much more valid, since the premise is that somebody claims the coin is fair. There should at least be a footnote noting Taleb's invalid example. ReverendDave (talk)
- We are now arguing about sentence construction, etc.; the idea is not in question anyway, just the phrasing and wording. Yechezkel Zilber (talk) 22:37, 6 February 2010 (UTC)
- The idea is actually very different depending on how you word it. In any event, I will change the wording of the example to more accurately reflect how it is worded in the book. ReverendDave (talk) 11:55, 7 February 2010 (UTC)
- Surely the problem is that if the words of the third person aren't taken as axiomatic, the entire thing could just as easily be a scam aimed at someone like Fat Tony. After claiming a fair coin has resulted in 99 heads, as you say, most people would assume it isn't a fair coin, and then the third person could get Fat Tony to bet at 10:1 odds that the next result will be heads (which Fat Tony would now think is a guaranteed winning bet, especially if the con-man sells the con well enough). In fact the coin is really fair; it was only the 99-heads claim that was the lie. --86.173.140.91 (talk) 18:50, 15 April 2010 (UTC)
Are we really leaving this example in place? The thought experiment proposed is insane and the editors above made good points about why. It's basically saying that reasoning from absurd premises is fallacious - which is essentially the opposite of the truth (it would be fallacy to ignore premises you don't like in a problem). 0x0077BE (talk) 23:43, 11 February 2014 (UTC)
Article needs to be significantly reworked (or deleted)
This article is a complete mess. Reading it, it's clear what he's criticizing, but not at all clear that this is fallacious. I understand why it is written that way, having read Taleb's books, but he honestly meanders in and out of coherence during the entire trilogy, and if we're going to include this article in Wikipedia, we should try and keep on the "coherent" side of things. As written, it appears that the "ludic fallacy" is actually an example of a continuum fallacy - some information is lost when modelling the real world, therefore modelling is never appropriate; this is an extreme heterodox position and we should be able to get a "criticism" section in here, if that's an accurate characterization of the point.
The "examples" are all just... bizarre. I get that they are taken directly from the book, but the "point" of the first one is that you should selectively ignore premises when reasoning and the other two aren't really illustrative examples, they are just statements about the perceived weaknesses of models. I suggest culling the entire section.
Finally, I think there need to be some reliable sources to substantiate that this term is used anywhere outside of Taleb's work and that it is a serious topic that merits its own article. If not, the article should be deleted or merged into an appropriate page (maybe Taleb's page or the page for one of his books). 0x0077BE (talk) 23:58, 11 February 2014 (UTC)
- "keep on the "coherent" side of things" With all due respect, that is not our job. If the sources are confused, then so are we. If commentators share this opinion, great, let's cite them. If not, we have no sources to refer that claim to.
- "taken directly from the book" I don't see how we can ignore the very source this article is meant to represent.
- "substantiate that this term is used" A quick google gives 5k+ hits, 100 on Scholar. Too lazy to do the legwork right now, so I tagged the article.
- Can somebody tell me what the Medin and Fodor works have to do with the ludic fallacy? If not, they should be dropped. Paradoctor (talk) 02:00, 12 February 2014 (UTC)
- Regarding the first point - the article on Time Cube is not an incoherent mess, even though the source material is. It is our job as editors to include things that are important and lead to better comprehension of the subject, not to blindly copy someone's assertions. If these are the only examples of the ludic fallacy that we can find, and they are unclear and incoherent, they shouldn't be included.
- Regarding the second point - this is the crux of why this article is, in my opinion, not notable. The article is purported to be about a general concept, but it's just a discussion of a theme in one of Taleb's books. It would be absurd to "stay true" to everything in On the Origin of Species in the article on Evolution just because that's the first significant introduction.
- Regarding the third item - I'm not seeing any substantive adoption of the term outside of reference to Taleb himself. Most of the google hits are book reviews or other discussions of the concepts of the book. The scholar hits seem similarly low-quality. I am, however, agnostic about it - if it's been legitimately adopted as a concept independent of Taleb's corpus, then someone should be able to find sources to substantiate this claim. I believe that was requested on this talk page 4 years ago, though, and no such sources have been forthcoming. 0x0077BE (talk) 02:17, 12 February 2014 (UTC)
- Proposal for the creation of a 'Taleb Fallacy', in which one assumes that the use of hypothetical situations is an accurate depiction of the logical processes by which humans operate. — Preceding unsigned comment added by 73.14.217.163 (talk) 01:56, 9 March 2016 (UTC)
Fiction presented as evidence
It is ludicrous to give fictional stories as evidence for a theory, even if you call doing so a "thought experiment". Allowing this would prove all sorts of crackpot ideas. It is even worse than proving things by anecdote. Even though they are copied out of a book, some warning or critique should be given. 92.3.60.236 (talk) 22:17, 21 October 2018 (UTC)
- These are not evidence, they are examples. I agree that even examples must come from real life. Otherwise, who knows whether this fallacy is real. Staszek Lem (talk) 18:23, 23 October 2018 (UTC)
Lede sentence of description §
I removed the 'alleged' inserted here as I could find no support for that characterization. (That edit by 94.217.151.27 was otherwise reasonable, imo.) I've collected the below descriptions of the fallacy as fodder from which to construct a replacement for the lede sentence of the Description §, starting with Taleb's glossary entry in Antifragile, then rewording by a few others:
- "Mistaking the well-posed problems of mathematics and laboratory experiments for the ecologically complex real world. Includes mistaking the randomness in casinos for that in real life." (Antifragile, 2014, p. 429)
- "[T]he 'ludic fallacy' provides that there are inevitable imperfections in modeling the real world by particular models, and that unquestioning reliance on models like this theory, blinds one to their limits …" (Ohairwe, G., Basheka, B. C., & Zikusooka, C. M. (2015). Decision making practices in the pharmaceutical sector: Implications for Uganda. African Journal of Business Management, 9(7), 323-345.)
- "[T]he confusion of games or math models with real life. Only probabilities in games can be known a priori" (Torras, M. (2016). Uncertainty about uncertainty: the futility of benefit-cost analysis for climate change policy. real-world economics review, 11)
- "[T]he making of a false parallel between real-world risks with unknown outcomes and games of chance which are constructed to have known outcomes." (Kapardis, A., & Farrington, D. P. (Eds.). (2016). The psychology of crime, policing and courts. Routledge.)