In “On being a skeptic”, I said that skeptics look at evidence and make rational judgments based on the evidence:
We don’t say, “Bullshit!”, and we don’t say, “It’s a scientist saying it, so it must be true.” We look at the evidence.
I note that non-skeptics — or those who style themselves as a different kind of skeptic — also put forth “evidence” and claim support from it. But their evidence turns out to be faulty, and, often, the fault is in their reliance on one or more logical fallacies. They’re using faulty logic, which is generating faulty evidence, which is supporting... not any judgment they might make from what they see, but what they’ve already determined they want to believe.
I thought I’d have a look, in a series of posts, at some of the common logical fallacies. I’ll start with the one that’s perhaps the hardest for us to avoid, one that most of us have to work hard not to fall into ourselves:
Simply put, confirmation bias is seeing what you want or expect to see, and ignoring what contradicts it. When you get stopped at a traffic light, and you say, “This light is always red when I come to it,” that’s probably confirmation bias. When I blogged, a couple of years ago, about how I seem to see 11:11 or 1:11 on my clock more often than I statistically should, that was confirmation bias. I know about confirmation bias, and it’s still hard for me to see 11:11 on the clock and not think, beneath my smile, “See: there’s something to this.”
But to those who aren’t aware, or who deny it, confirmation bias is a huge trap. It’s what makes us remember all the stories about bogus remedies that “worked”, because we forget the times when they didn’t. It’s what makes us certain that prayer works, because we ignore all the cases when it doesn’t. When we expect to see God everywhere, confirmation bias has us see Jesus and Mary in cheese sandwiches, wall plaster, and cow patties. When we think a psychic can really see things, or that astrology is predictive, we find sense and truth in their vague, equivocal statements through confirmation bias.
That’s where the scientific method comes in. Science has us control a study, record all observations, and then see if there’s really support for our thesis. Record the state of the traffic light every time, and see if it’s really red more often than not. Record the time whenever you look at the clock, and see how often 11:11 really does come up. Show people a bunch of random images, and measure their inclination to find Jesus in them... then see what else that correlates to.
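To put a number on the clock example, here’s a small simulation sketch (the setup and figures are my own, not from any actual study): if glances at a 12-hour clock are spread uniformly over the day, only 4 of the 1,440 minutes in a day display 11:11 or 1:11, so an unbiased observer should catch one only about 0.28% of the time.

```python
import random

# Hypothetical sketch: simulate random glances at a 12-hour clock and
# count how often the display reads 11:11 or 1:11.
random.seed(0)          # fixed seed so the run is reproducible
glances = 100_000
hits = 0
for _ in range(glances):
    minute_of_day = random.randrange(24 * 60)  # uniform over the day
    hour24, minute = divmod(minute_of_day, 60)
    hour12 = hour24 % 12 or 12                 # convert to 12-hour display
    if (hour12, minute) in {(11, 11), (1, 11)}:
        hits += 1

rate = hits / glances
# 4 qualifying minutes out of 1,440 in a day
print(f"Observed rate: {rate:.4%}  (expected ~ {4 / 1440:.4%})")
```

The point of recording every glance, as the scientific method demands, is that the observed rate converges to the boring baseline; memory alone keeps only the hits.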
It’s also important to remember that confirmation bias fools even the wary, sometimes in subtle ways. Its biggest danger is that it leads us to confirm our hypotheses, rather than to truly test them with counterexamples. When we design studies, we must make sure we take that into account, putting in sufficient challenges, as well as supporting cases. For example, if our hypothesis is that plant X is only found near water, we can’t test that hypothesis by only looking near water; we also must look where we don’t expect to find it, to make sure that we don’t find it there.
Want some real examples of confirmation bias? Here’s an item from the NY Times a year and a half ago, about the taste of wine:
But assuming for the moment that it’s true that most drinkers prefer the cheap stuff, why does anyone bother buying $55 cabernet? One answer is provided by a second experiment, in which presumably sober researchers at the California Institute of Technology and the Stanford Business School demonstrated that the more expensive consumers think a wine is, the more pleasure they are apt to take in it.

The researchers scanned the brains of 21 volunteer wine novices as they administered tiny tastes of wine, measuring sensations in the medial orbitofrontal cortex, the part of the brain where flavor responses apparently register. The subjects were told only the price of the wines. Without their knowledge, they tasted one wine twice, and were given two different prices for that wine. Invariably they preferred the one they thought was more expensive.

Note that the study isn’t relying on subjective reporting: the subjects’ brains are actually responding differently, depending upon how much they think the wine costs. This is really wired into us.
And this item just appeared in New Scientist the other day. This one is subjective, but still interesting.
Sixty people in turn were shown the same video clip on the same television. Half were told to expect clearer, sharper pictures thanks to HD technology: an impression backed up by posters, flyers and the presence of an extra-thick cable connected to the screen. The other half were told to expect a normal DVD image.
Questionnaires revealed that the people who had been led to expect HD reported seeing higher-quality images. “Participants were unable to discriminate properly between digital and high-definition signals,” says Lidwien van de Wijngaert at the University of Twente in Enschede, the Netherlands, who carried out the study with colleagues from Utrecht University.
In fighting confirmation bias, double-blind tests, randomized trials, and peer review are your friends.