This would lead us to avoid statements like “men are happier than women”, or “boys don’t respond to sounds as rapidly as do girls”, or “Asians have a more collectivist mentality than Europeans do” — or “the brains of violent criminals are physically and functionally different from the rest of us”. At least, we should avoid this way of talking about the results of scientific investigations.
The reason? Most members of the general public don’t understand statistical-distribution talk, and instead tend to interpret such statements as expressing general (and essential) properties of the groups involved. This is especially true when the statements express the conclusions of an apparently authoritative scientific study, rather than merely someone’s personal opinion, which is easy to discount.
The problem, in case you don’t see it from what’s quoted above, is this (I’m going to make some details up, just to give an example):
Suppose some researchers do a study in which they ask people how happy they are, on a scale of 1 to 10. Suppose that they ask 50 men and 50 women, and the average happiness rating for the men is 7.3, while the average score for the women is 7.1. Now suppose that the study is reported in the news with the statement that “men are happier than women.”
Or let’s be even more straightforward: suppose the 50 men and 50 women are simply asked, “On the whole, are you happy?” 37 of the men and 36 of the women say, “Yes.” And the newspapers report that, according to a recent study, “men are happier than women.”
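In fact, with the made-up numbers above, the difference isn’t even statistically meaningful. Here’s a quick sanity check (a sketch in Python; the counts are the hypothetical ones from the example, not real data) using a standard two-proportion z-test:

```python
from math import sqrt

# Hypothetical counts from the example above: 37/50 men and 36/50 women say "yes".
men_yes, women_yes, n = 37, 36, 50

p1, p2 = men_yes / n, women_yes / n        # 0.74 vs. 0.72
pooled = (men_yes + women_yes) / (2 * n)   # pooled proportion: 0.73

# Two-proportion z-statistic: the observed difference divided by its standard error.
se = sqrt(pooled * (1 - pooled) * (2 / n))
z = (p1 - p2) / se

print(round(z, 2))  # 0.23 -- nowhere near the 1.96 needed for significance at p < 0.05
```

So even before we worry about readers over-generalizing, a headline like “men are happier than women” could be built on a difference that is statistically indistinguishable from noise.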
Of course, George reads that over his morning coffee, and says, “Hey, Martha. It says here that I’m happier than you. Ha! I always knew there was something wrong. Maybe you need some of that Prozac stuff.”
But we can’t generalize a finding about a group’s average... to particular individuals in the general population. Martha may be far happier than George, and the study doesn’t say otherwise. George just doesn’t understand.
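A small simulation makes the point concrete. The numbers here are assumptions for illustration only (group means of 7.3 and 7.1, with a made-up spread of 1.5 points): even when the male average is higher, a randomly chosen woman out-scores a randomly chosen man nearly half the time.

```python
import random

random.seed(1)

# Assumed, illustrative distributions: same spread, slightly different means.
MEN_MEAN, WOMEN_MEAN, SPREAD = 7.3, 7.1, 1.5
TRIALS = 100_000

# Count how often a random "Martha" is happier than a random "George".
woman_happier = sum(
    random.gauss(WOMEN_MEAN, SPREAD) > random.gauss(MEN_MEAN, SPREAD)
    for _ in range(TRIALS)
)

print(woman_happier / TRIALS)  # roughly 0.46: Martha beats George nearly half the time
```

A 0.2-point gap in averages, against a 1.5-point spread of individual variation, tells you almost nothing about any particular pair of people.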
Of course, the problem isn’t limited to generic plurals with no statistics behind them. We could report that a study shows that “men are 50% more likely than women to get into traffic accidents,” but that wouldn’t mean that I am 50% more likely, just because I’m a man. Other factors, which the study might or might not go into, cause the difference; the study itself shows only a correlation.
So it’s important to word these reports in a way that doesn’t invite that sort of misinterpretation. It’s important for a number of reasons:
- The media already often get the details wrong in reporting scientific studies; compounding that with confusing wording only makes it worse.
- The media often highlight the wrong bits, in efforts to get catchy headlines and “interesting” copy.
- Readers don’t understand statistics, and misinterpretation is likely even when the stats are there. Don’t make it worse by eliminating them.
- Readers are prone to generalize results beyond what’s valid, and they’ll likely apply a group trend to specific individuals, as in the example above.
- Readers don’t understand the limitations of studies. Reporters should try to talk about one or two key limitations.
In The Telegraph’s competitor, The Guardian, Ben Goldacre seemed to enjoy tearing the former’s report apart:
Women who drink alcohol, wear short skirts and are outgoing are more likely to be raped? “This is completely inaccurate,” Shaw said. “We found no difference whatsoever. The alcohol thing is also completely wrong: if anything, we found that men reported they were willing to go further with women who are completely sober.”
We often say that the public needs to be better educated with respect to science and critical thinking. This is a good place to start... and the news media need to be among the educators.