Talk:Confirmation bias/GA1

GA Review

Reviewer: Neramesh (talk) 18:51, 4 October 2009 (UTC)

In many ways, this is a comprehensive and impressive article, and a lot of admirable work has gone into collecting the information. Unfortunately, the presentation of this information is flawed. It is not adequately organized, and the clarity of the writing suffers in some sections. I'm not going through the GA criteria in detail because I think that there are some issues that need to be addressed first. Neramesh (talk) 18:51, 4 October 2009 (UTC)

Organization

The article is poorly organized:

  • Some section titles are uninformative. Terms like "positive test strategy" are completely meaningless to the layman. Even those terms that have been introduced in the lead, such as assimilation bias, might be replaced by more accessible language. Neramesh (talk) 18:51, 4 October 2009 (UTC)
  • Some sections are dependent on other sections (e.g., the opening sentence of "Biased hypothesis testing" assumes that the reader has read the "positive test strategy" section). Neramesh (talk) 18:51, 4 October 2009 (UTC)
  • There needs to be some higher-order organization in this article! In particular: Neramesh (talk) 18:51, 4 October 2009 (UTC)
    • Some sections address or confuse multiple issues. The positive test strategy section addresses both (1) the way that people collect evidence and (2) whether confirmation bias is a genuine bias. Other topics (confirmation bias in hypothesis testing) span multiple sections. Neramesh (talk) 18:51, 4 October 2009 (UTC)
    • Sections like "Belief perseverance" need to be distinguished from sections like "Selective memory." Belief perseverance is a potential consequence of confirmation bias. Selective memory is a potential explanation or type of confirmation bias. These should each be subsections of larger categories.
    • Personally, I think that the interpretation of confirmation bias deserves its own section. It would be nice to highlight the way that Klayman & Ha frame the problem of hypothesis testing, because it questions whether the confirmation bias is a genuine bias. Neramesh (talk) 18:51, 4 October 2009 (UTC)
    • One possible higher-order organization: variants of confirmation bias (with subsections for collection of data, interpretation of data, etc.), explanations of confirmation bias (selective memory, etc.), and rationality of confirmation bias. Neramesh (talk) 18:51, 4 October 2009 (UTC)
  • The informal observation section feels thin and a little out of place. I suspect that one could add quotes to this section indefinitely. Why were these particular quotes chosen, and what do they contribute to the article? Perhaps this material is better used to illustrate the confirmation bias in the lead. Neramesh (talk) 18:51, 4 October 2009 (UTC)
In reply to the question about informal observation, all or nearly all of the key sources the article is based on (the textbooks and review paper) mention that Bacon and others observed confirmation bias centuries before it was explored experimentally, and the Bacon quotes used in this article are taken from there. This information provides context for why the early Wason experiments were hailed as demonstrating confirmation bias when in fact they didn't.
I don't think there's a need for a "rationality" section, but I accept that what (for now) I've called the History section can be cut down or merged into other sections. MartinPoulter (talk) 10:34, 12 October 2009 (UTC)

Writing

The article is poorly written in some sections:

  • The assimilation bias, attitude polarization, and primacy effect sections are poorly written and I cannot follow all of the methods. Furthermore, they often degenerate into lists of experiments, with little effort to form the experiments into a coherent argument or narrative. In some cases (e.g., primacy effect), the finding is never explicitly linked to confirmation bias. Neramesh (talk) 18:51, 4 October 2009 (UTC)
    • Various small issues (e.g., starting a sentence with the numeral 23 in the attitude polarization section). I won't go into all of the details here just yet, because I think that many of the problems will be obvious. Neramesh (talk) 18:51, 4 October 2009 (UTC)
  • Complexity vs. specificity in the 2,4,6 task. The rules that participants propose are variously described as more complex and more specific than the target rule. Both are true, but what is the relevant difference? According to Klayman & Ha, it is specificity. For clarity, I would frame the differences only in terms of specificity (i.e., avoid highlighting the differences in complexity); a sketch illustrating this point follows the list. Neramesh (talk) 18:51, 4 October 2009 (UTC)
  • Biased hypothesis testing section: It would be nice to elaborate on why subsequent studies are more definitive than Wason's original. Explain why Klayman & Ha's criticism doesn't apply to them. Neramesh (talk) 18:51, 4 October 2009 (UTC)
  • Consider when it is necessary to identify the names of the researchers and dates. When would the average reader need to know who performed a particular study and when it was conducted? Neramesh (talk) 18:51, 4 October 2009 (UTC)
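
To make the specificity point concrete, here is a minimal Python sketch of the 2,4,6 task. The rule definitions follow the standard framing of Wason's experiment, but the particular triples and function names are my own illustration:

```python
# A toy version of the 2,4,6 task. The participant's hypothesis
# ("ascending in steps of two") is more specific than the true rule
# ("any ascending triple"): every triple it accepts is also accepted
# by the true rule, so positive tests can never falsify it.

def true_rule(triple):
    # Wason's actual rule: any strictly ascending triple.
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    # A typical participant hypothesis: ascending in steps of two.
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Positive tests: instances the hypothesis says should fit the rule.
for triple in [(2, 4, 6), (10, 12, 14), (1, 3, 5)]:
    assert hypothesis(triple) and true_rule(triple)  # always confirmed

# Only an instance outside the hypothesis can expose its over-specificity:
assert not hypothesis((1, 2, 3)) and true_rule((1, 2, 3))
```

Containment of the specific rule within the general one, rather than any difference in complexity, is what makes the positive tests uninformative here.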

This has the potential to become a very strong article, but it requires better presentation and organization. Neramesh (talk) 18:51, 4 October 2009 (UTC)

Thanks very, very much, Neramesh, for taking the time to write this extremely thoughtful and helpful review. I think the concerns you raise are addressable, and it's useful to have a way forward spelled out. Maybe I've targeted the article at psychology undergraduates rather than lay readers. This is my first GAN, so I'm still getting a sense of what tone to aim for. I hope to attempt an overhaul this week. MartinPoulter (talk) 11:24, 5 October 2009 (UTC)
Sorry for the delay: had much less access to WP this last week than anticipated. For researcher names, I will try to use them only when the researchers' opinions or conclusions are being described, and perhaps also when a key study is referred to in multiple sections. The "lists of experiments" point is a good one: I've tried not to draw experiments together into a general conclusion, because I'm wary of making an original synthesis, but as you say the result is presently too much like a list. MartinPoulter (talk) 10:38, 12 October 2009 (UTC)
The criteria that you propose sound reasonable. I should qualify my previous complaint, because it is often appropriate to use researcher names. For example, Wason should definitely be mentioned. Only some of the names in the article (e.g., Clifford Mynatt) seem unnecessary. If you decide to leave some (or even most) of the names in, then that is OK as long as (1) there is a rationale behind including those names and (2) the rationale is consistently applied (e.g., why name Mynatt but not Shafir?). Neramesh (talk) 17:04, 12 October 2009 (UTC)
About dates, I'm not aware of any policy on this (obviously the dates should be, and are, made clear in the references). Putting the date after every mention of a study could interrupt the flow of the article. However, most of the article deals with how the theory of confirmation bias has changed over time, so it is important in most cases to say when a particular experiment was conducted. MartinPoulter (talk) 11:24, 12 October 2009 (UTC)
I agree that some chronology helps the narrative, but the year of publication is often less important than the relationship between the studies. If Study 2 qualifies Study 1, then it is better to introduce Study 2 as a response to Study 1 than to provide the years of publication for both studies. Providing the years leaves the relationship between the studies implicit and incorrectly implies that the particular years are important. If the year itself is important, include it, but if the relationship between studies is what matters, then it is better to specify the relationship and leave the years out. (Examples where the year might not be needed: the 1983 child custody case, the 1986 study that looked at biased weighting of evidence, Lord, Ross & Lepper.) Neramesh (talk) 17:04, 12 October 2009 (UTC)
That said, some years of publication will be important. Since Wason's study marks the beginning of scientific research on the confirmation bias, the exact year of that study is important. Neramesh (talk) 17:04, 12 October 2009 (UTC)
I understand and agree with all your above comments. This process has helped me see much more clearly what needs to be done, so thanks for your continuing involvement. I'm inspired by your comment that it has the potential to be a strong article: that's what I was hoping for, and that's what I'll work towards. MartinPoulter (talk) 16:40, 13 October 2009 (UTC)
Also, I'm new to Wikipedia and this is my first time reviewing an article, so please feel free to question my suggestions. Neramesh (talk) 17:04, 12 October 2009 (UTC)

Next steps

I've done a substantial reorganization and rewrite in response to the concerns raised, although I'm still not happy with how the positive test strategy material is shared between the "Types", "History" and "Explanations" sections. Frankly, I'm hitting a bit of a mental block right now. It would be good if Neramesh could take another look and say what further improvements, major or minor, are necessary. MartinPoulter (talk) 15:23, 17 October 2009 (UTC)

I'm considering getting rid of the set-theoretic diagrams. Although they illustrate the logical issues with a positive test strategy, the point they are trying to make is more abstract and difficult than the rest of the article, and arguably they are out of place in an article that's aimed at a general audience. I welcome comments on this idea. MartinPoulter (talk) 15:28, 17 October 2009 (UTC)

Comments on revised version

General comments

The article is much improved. I am still requesting a few changes, but I think that you can bring it to good article status very soon. I have listed the changes that I believe are necessary for good article status under "Required changes."

In the "Other comments" section, I make some additional suggestions. These suggestions don't need to be followed in order to reach good article status, but I have provided them in case you wanted to continue working on the article past good article status. Neramesh (talk) 06:34, 24 October 2009 (UTC)Reply

Required changes

  • Turn the explanations section into prose. Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • Restrict the substance of Klayman & Ha's argument to one section (see longer comments below for one idea of how to do this). Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • A lot of the lead is very good, but given its importance, I think that it needs a little more attention:
    • Calling the confirmation bias "irrational" might be too strong in light of Klayman and Ha. I would feel more comfortable with "tendency" instead of "irrational tendency." The lead should also mention that the irrationality of the bias is disputed. Neramesh (talk) 06:34, 24 October 2009 (UTC)
    • Remove "behavioral confirmation effect" from the lead, or at least move it to a less prominent position. Neramesh (talk) 06:34, 24 October 2009 (UTC)
    • In what situations do people test hypotheses in a genuinely informative way? If they often do so, then this qualification needs to be presented earlier in the lead and made more specific. Neramesh (talk) 06:34, 24 October 2009 (UTC)

Klayman & Ha

  • Klayman & Ha should be featured prominently and should be alluded to in the lead. For many cognitive psychologists, Klayman & Ha is the paper on confirmation bias and hypothesis testing. It dismantled Wason's argument, undermined the falsification view of hypothesis testing, and foreshadowed both rational analysis and the Bayesian approach, which have since become dominant in cognitive science. Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • I don't know where it is best to put Klayman & Ha, so I'm OK with whatever you decide. However, I do have a suggestion: I would recommend placing Klayman & Ha almost entirely in the explanations section (and not in the biased search for information or history sections). The primary risk of this approach is that it might bury the argument too deep in the article and make the rest of the article misleading. To hedge against this risk, it would be necessary to present the outline of the argument in the lead (or at least mention that its irrationality is disputed). In this way, all readers would be aware that there is debate about the rationality of confirmation bias, and it would be OK if the search for information section didn't address it in detail. Neramesh (talk) 06:34, 24 October 2009 (UTC)
    • Another disadvantage of this organization is that multiple sections will require that the reader be familiar with Wason's 2-4-6 task. I would recommend featuring Wason's 2-4-6 task in the lead. The primary reason is that it appears in multiple sections, but given the historic importance of Wason's study, the placement might be justified anyway. Neramesh (talk) 06:34, 24 October 2009 (UTC)

Other comments

  • I checked the Manual of Style, and the bolding or italicizing of terms is recommended. I find it a little disruptive, but it is OK if (and perhaps necessary that) it stays in. Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • I don't like the amygdala image: I don't think it illustrates anything important. Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • I think that the Venn diagrams are helpful. I would also consider adding the "T is a subset of H" diagram, which really makes the point (i.e., it shows that sometimes positive tests are the only way to falsify a hypothesis); a sketch of this point follows the list. The T-subset-of-H diagram is also helpful in that it has a small T and fits well with the rarity assumption. Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • You have done a great job cleaning up the writing. You may want to bring in a copy-editor at some point, but I have no problem moving this up to good article status as it is currently written. Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • The sections feel less like lists of experiments, but additional framing might be helpful. For example, do the various studies of biased interpretations all tell us the same thing? If so, then is it necessary to discuss the methods of each in detail? If not, then would it be helpful to highlight the unique contributions of each study? Neramesh (talk) 06:34, 24 October 2009 (UTC)
    • For now, it is great that you have all of this content and detailed methods. To make the article truly great, however, it is necessary to carefully consider where the details are necessary and where they are not. Having too many details can obscure the point. I suspect that as this article evolves it will become more focused and start to drop some of the details. Neramesh (talk) 06:34, 24 October 2009 (UTC)
  • Consistency in terms: assimilation bias vs. biased interpretation. Neramesh (talk) 06:34, 24 October 2009 (UTC)
    • Also, "myside bias" is used a few times in the consequences section. Neramesh (talk) 06:34, 24 October 2009 (UTC)
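
As an illustration of the "T is a subset of H" point above, here is a minimal Python sketch with made-up toy sets standing in for the regions of the Venn diagram:

```python
# Toy sets for the case where the true rule T is strictly contained in
# the hypothesis H. Negative tests (instances outside H) can never
# falsify H here, so positive tests are the only route to disconfirmation.

universe = set(range(20))
T = {0, 1, 2}            # instances fitting the (rare) true rule
H = {0, 1, 2, 3, 4, 5}   # instances fitting the hypothesis

assert T < H  # T is a strict subset of H

# Negative test: pick x outside H; it falsifies H only if x fits T.
# With T < H, no such x exists: everything outside H is also outside T.
assert all(x not in T for x in universe - H)

# Positive test: pick x inside H; it falsifies H if x does NOT fit T.
# Such instances exist (3, 4 and 5), so positive testing can falsify.
assert any(x not in T for x in H)
```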

You have done a lot of really great work on this article, and I'm looking forward to making it an official good article soon. Neramesh (talk) 06:34, 24 October 2009 (UTC)

You've made my day with this! I look forward to getting some serious time to work on the article in the next few days. At a quick glance, all your suggestions seem sensible and should be implemented. There's an issue about the relation between positive tests and conf bias that I will try to spell out in talk, because the different sources use this terminology in slightly different ways and I'm not sure how to address this in the article. Yes, I do hope to take the article beyond GA, so all your comments are valuable. Cheers, MartinPoulter (talk) 10:49, 24 October 2009 (UTC)

Response: rationality

What I wanted to say about rationality is that there seems to be a common trend across the literature on biases to define biases as irrational (in specific contexts) effects that often result from heuristics that are rational (under certain constraints). The major sources for this article take this approach. Klayman & Ha is also an example of this, e.g. "many phenomena labeled 'confirmation bias' are better understood in terms of a general positive test strategy." I don't take them as showing that confirmation bias is sometimes rational. Instead, they showed that the "confirmation bias" results didn't necessarily demonstrate confirmation bias itself, but rather that subjects were using the PTS heuristic. They and other authors (Oswald & Grosjean, Nickerson, Poletiek) argue that positive test strategies sometimes lead to confirmation bias effects. Note that the two things are distinguished, although one can lead to the other.

Therefore: Confirmation bias should still be defined as irrational: like the other cognitive biases, it biases people away from a rational optimum. There isn't a controversy about whether conf bias is irrational, just about whether certain behaviour counts as conf bias. An exception to this is the Trope & Bassok rational choice model. Still, that's a minority view which just gets a mention in one source, and most of the effects discussed in the article at the moment are unambiguously irrational even in their terms.

BTW, I've noted that the Kunda book uses "one-sided testing" and this strikes me as a good phrase to use in the article, probably replacing the term "positive test strategy" in some places. Still getting little drips of time to work on the article, but will have a day to make serious changes within the next week. MartinPoulter (talk) 13:43, 27 October 2009 (UTC)

This is a deep issue, and confirmation bias is probably irrational at some level. Still, it is very easy for someone to misinterpret the term "irrational" in this article, because there are many ways in which a positive test strategy is the most rational strategy. Calling the positive test strategy irrational implies that people are somehow making a mistake. However, Klayman & Ha showed that people cannot know whether a positive test or a negative test will provide information or produce disconfirmation. If you do not know which test will be informative, then choosing one test can hardly be called an irrational mistake: without making assumptions about the rarity of the rule, there simply isn't any basis for knowing which test will provide information. In other words, even the "rational optimum" cannot know which tests will be informative without such assumptions, and we can't call people irrational if the "rational optimum" can't do any better.
When someone makes assumptions about the rarity of the rule, it becomes possible to decide whether a positive or negative test is more informative. If the target rule is rare, then it is rational to use the positive test strategy. Furthermore, there is reason to believe that rules typically are rare, and that people make this assumption (Craig McKenzie has done some research on this question). There is also some evidence that people will switch to negative tests when they know that rules are not rare (I think that Oaksford & Chater have done some work on this in the selection task). A rough calculation of this rarity effect follows below.
Oaksford & Chater (2009) make a similar argument in more detail, mostly with respect to the selection task. It is a Behavioral and Brain Sciences paper that would provide a good introduction to this debate and the Bayesian perspective on rationality. Neramesh (talk) 16:32, 27 October 2009 (UTC)
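
The rarity effect mentioned above can be illustrated with a back-of-the-envelope calculation; the numbers below are toy values of my own, not figures from Klayman & Ha:

```python
# Toy numbers for the rarity argument: if the true rule T and the
# hypothesis H each fit 10 of 1000 possible instances and half-overlap,
# a positive test is roughly 100 times more likely to produce
# disconfirmation than a negative test.

N = 1000        # possible instances
size_T = 10     # instances fitting the true rule
size_H = 10     # instances fitting the hypothesis
overlap = 5     # instances fitting both

# Positive test: sample from H; falsifies H when the instance is not in T.
p_positive = (size_H - overlap) / size_H        # 0.5

# Negative test: sample outside H; falsifies H when the instance is in T.
p_negative = (size_T - overlap) / (N - size_H)  # ~0.005

print(p_positive, p_negative)
```
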
I think in haste you may have misread what I wrote above. "Calling the positive test strategy irrational"? Note that I haven't done this in this Talk. I'm saying that conf bias is by definition irrational, but PTS is a heuristic which can lead to conf bias; a heuristic which is itself, on the whole, rational. I think I'm following Klayman & Ha's usage (and a general theme in heuristics & biases literature) in distinguishing the irrational bias from the rational heuristic. In fact, it seems we're all in agreement. I think the section on hypothesis testing in the article needs to make this distinction more clearly: that people are known to make one-sided tests, that this isn't necessarily a bias in itself, but some outcomes of PTS can be considered biases. I'm familiar with the Oaksford & Chater rational analysis stuff. MartinPoulter (talk) 16:47, 27 October 2009 (UTC)
I see your point better now, but I still disagree. In particular, I don't think that confirmation bias is irrational by definition. The definition of confirmation bias requires that the tests being performed are uninformative (i.e., tests that won't disconfirm the hypothesis), but uninformative tests are not necessarily irrational. In the case of confirmation bias, the tests are not irrational, because there is no way of knowing that a test is uninformative. Although people clearly have a bias to perform tests that only provide confirmation in some situations, you'd have to show that they could know that the tests are uninformative in order to convince me that the behavior is irrational. In general, this isn't the case. In confirmation bias, people are performing the wrong test by definition, but performing the wrong test isn't irrational unless it is possible to know what the correct test is.
To present a (very) crude analogy, consider someone who is attempting to predict whether a coin lands heads or tails. If the person predicts heads each time that you ask him to make a prediction, he will be wrong 50% of the time. However, few people would call his predictions irrational. Incorrect predictions are not irrational unless there is a basis for making better predictions. Neramesh (talk) 19:49, 27 October 2009 (UTC)
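
A minimal simulation of the coin analogy, with illustrative strategies and numbers of my own choosing:

```python
import random

# Always predicting heads is wrong about half the time on a fair coin,
# but no other strategy does better in expectation, so the errors are
# not evidence of irrationality.

random.seed(0)
flips = [random.choice("HT") for _ in range(10_000)]

always_heads = sum(f == "H" for f in flips) / len(flips)
random_guess = sum(f == random.choice("HT") for f in flips) / len(flips)

print(always_heads, random_guess)  # both hover around 0.5
```
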
In my previous criticism, I used the term "positive test strategy" and falsely implied that you called it irrational. However, the substance of the argument still applies if you substitute "confirmation bias" for each instance of "positive test strategy." —Preceding unsigned comment added by Neramesh (talkcontribs) 19:48, 27 October 2009 (UTC)
I see your point now. Checking the references, they don't actually use the word "irrational"; similar words, but not "irrational". So I'll remove that word from the article. I think there is scope in the historical section of the article for a paragraph about the normative models used: the shift from deductive falsification to induction and Bayesianism.
I'm still not happy with the idea of saying that the rationality of conf bias is disputed. That could be potentially confusing. The situation is complicated by the fact that "confirmation bias" is often used for a number of different effects, not just in hypothesis testing. Better to just describe the observed behavior and point out where it definitely clashes with normative models. That's what the sources do. MartinPoulter (talk) 11:31, 28 October 2009 (UTC)

There is room to compromise here: you don't have to say that the rationality of confirmation bias is disputed. I'd still strongly prefer that the term "irrational" be avoided, but it is appropriate to state that confirmation bias leads to serious mistakes in many contexts. This approach sidesteps the issue of rationality while still emphasizing that confirmation bias is something that people should be worried about.

Of course, there should also be some indication that the confirmation bias arises from a strategy that is sensible in a broader context. However, I'll leave the exact phrasing up to you. If you don't want to call it rational, then adaptive may be a suitable substitute. Or you could just state the facts: confirmation bias usually involves testing positive instances of the hypothesis, a strategy that is appropriate in many situations. This isn't going as far as I would, but it represents a reasonable framing of the issue.

As an aside, while I tend to consider the positive test strategy to be relatively rational, there are some results that are clearly not rational. When the phrasing of the question leads people to switch tests, it is probably in violation of all normative standards. Neramesh (talk) 15:22, 28 October 2009 (UTC)

Sorry this is taking so long, and thanks for your forbearance. I have a good idea how to rewrite the section on hypothesis-testing, but am finding it hard to get serious wiki-time around my day-job. I expect to really engage in the next few days. Something I could use guidance on (from anyone): now that I've moved some content out of the lede, the first two sentences are a bit similar to each other. Should I worry about the redundancy? Perhaps the initial sentence could be more succinct without dividing conf bias into three areas, which is ably done by the next sentence. MartinPoulter (talk) 13:55, 11 November 2009 (UTC)

New revision

Okay, at last there's been another overhaul, trying to address the above suggestions:

  • I've moved the "self-fulfilling prophecy" mention later in the lede, but not removed it. My rationale is that it serves to define conf bias by explaining what it isn't and could be confused with. If the mention is confusing for readers, then I suppose I could be persuaded to remove it, but at the moment I think it does more good than harm.
  • There's a mention of Klayman and Ha in the lede, but I don't want to put too much detail about that up-front as I think it makes it less introductory. I've followed the Oswald and Grosjean (2004) approach through most of the article. They stress that although "confirmation bias" originated with Wason, the phrase is now used for a wider range of tendencies, only some of which bear on hypothesis-testing (and in fact confirmation bias is hardest to demonstrate in the context of hypothesis-testing). If we had been writing an article about this topic back in the '90s, then I would have said the Wason tasks and K-H have to take centre stage, but in light of how Oswald & Grosjean, Kunda and Fine present it I think differently now. I've opted to put K-H mainly in the History section, but with a mention under explanations. I recognise that K-H has a wider importance in terms of introducing rational analysis, but I don't think mentioning this in the article would serve the purpose of introducing general readers to confirmation bias.
  • Further comments welcome as always!

MartinPoulter (talk) 18:34, 15 November 2009 (UTC)

Final review

I am passing the article. Congratulations, Martin!

I believe that it meets all of the criteria for a good article, and it is particularly impressive in its scope. Neramesh (talk) 06:37, 16 November 2009 (UTC)

My first GA! I'm joyous! Thanks so much: the article has been really transformed by your involvement. I hope you will continue to be a reviewer and we'll get more psychological articles up the quality scale. MartinPoulter (talk) 12:49, 16 November 2009 (UTC)