Beset with personal and professional obligations of late, I’ve been shamefully absent from this blog for so long now that all of you have forgotten me altogether. The current debate about the famed Lancet article is a fascinating one, however, and it seems as good a time as any to dive back in.
Applying rigorous statistical methodology in the soft sciences is always fraught with difficulty. That’s not to say it can’t be done effectively (it can, and it should!), but it’s very much a different ballgame. My training was in the hard sciences, where the application of statistical analysis is fairly straightforward. When my wife, the social scientist of the household, asks for statistical help with her research, I always find it more daunting than quantifying (say) nuclear reaction rates.
This particular study has been a lightning rod for controversy since its publication, as partisans from both sides of the divide bring their own ideological biases into the debate, and the whole discussion devolves into a meta-argument that has more to do with the political axes the participants have to grind than with the article itself. We saw it clearly here on this blog, as intense passions were inflamed over an argument about statistics.
For my part, I found the Lancet paper itself largely unremarkable, in both its methodology and its conclusions. I think the researchers did a reasonable job considering the nature of their study. At the end of the day, they essentially reported a 95% confidence interval for the number of civilian deaths ranging from 8,000 to 198,000.
Granted, those are large error bars, but that often happens in scientific research. The real crime in the reporting of these findings came from ignoramuses who mindlessly took the midpoint, the unweighted mean of the two interval bounds, which led to countless media headlines screaming “100,000 Iraqi Dead!” and handed the anti-war left a convenient cudgel of a talking point, lent undue legitimacy by the Lancet’s respected status within the world of medical journals.
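To see why splitting the difference between the two bounds is statistically naive, consider that the sampling distribution behind an estimate like this is typically right-skewed, so the midpoint of a 95% interval can sit well above any defensible central estimate. The sketch below uses invented numbers (a hypothetical lognormal distribution, not the Lancet study’s actual data or model) purely to illustrate the point:

```python
import random
import statistics

random.seed(42)

# Hypothetical right-skewed sampling distribution for an estimated
# death toll. Parameters are invented for illustration only; they do
# not come from the Lancet study.
samples = sorted(random.lognormvariate(10.8, 0.9) for _ in range(100_000))

lo = samples[int(0.025 * len(samples))]   # 2.5th percentile
hi = samples[int(0.975 * len(samples))]   # 97.5th percentile
midpoint = (lo + hi) / 2                  # unweighted mean of the two bounds
median = statistics.median(samples)       # a more defensible central estimate

print(f"95% interval:        ({lo:,.0f}, {hi:,.0f})")
print(f"midpoint of bounds:  {midpoint:,.0f}")
print(f"median of samples:   {median:,.0f}")
```

With any distribution skewed toward the high end, the midpoint of the bounds lands far above the median, which is exactly why quoting “the mean of 8,000 and 198,000” as *the* estimate misrepresents the finding.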
I think that’s what really got Kaplan’s drawers in a knot, and understandably so. I do think many of Kaplan’s criticisms of the report’s methodology missed their mark, but I also think Adam was too quick to apologize for bringing it into this discussion. Scientists are a cantankerous lot, and as someone who has refereed scientific papers for peer-reviewed journals myself, I can assure you that arguments very similar to Kaplan’s were given full voice behind the editorial scenes before the final draft was published. They were an integral part of the discussion then, and there’s no reason they should not be part of the dialogue here as well.