02 May 2006

But I Don't Have Any Peers!

An article in today's NYT treats the problems with scientific peer review as if they were somehow newsworthy. This was obviously a slow news day… although the article will prove a feast for somewhat sophisticated lawyers.

Because findings published in peer-reviewed journals affect patient care, public policy and the authors' academic promotions, journal editors contend that new scientific information should be published in a peer-reviewed journal before it is presented to doctors and the public. That message, however, has created a widespread misimpression that passing peer review is the scientific equivalent of the Good Housekeeping seal of approval. Virtually every major scientific and medical journal has been humbled recently by publishing findings that are later discredited. The flurry of episodes has led many people to ask why authors, editors and independent expert reviewers all failed to detect the problems before publication.

Lawrence K. Altman, "For Science's Gatekeepers, a Credibility Gap" (02 May 2006) (fake paragraphing corrected). The key passage, though, occurs later on—and reflects a rather serious misunderstanding of the scientific method.

In the April 18 issue of Annals of Internal Medicine, its editor, Dr. Harold C. Sox, wrote about lessons learned after the journal retracted an article on menopause by Dr. Eric Poehlman of the University of Vermont. When an author is found to have fabricated data in one paper, scientists rarely examine all of that author's publications, so the scientific literature may be more polluted than believed, Dr. Sox said. Dr. Sox and other scientists have documented that invalid work is not effectively purged from the scientific literature because the authors of new papers continue to cite retracted ones. When journals try to retract discredited papers, Dr. Sox said, the process is slow, and the system used to inform readers faulty. Authors often use euphemisms instead of the words "fabrication" or "research misconduct," and finding published retractions can be costly because some affected journals charge readers a fee to visit their Web sites to learn about them, Dr. Sox said.

Id. (fake paragraphing corrected).

Have you spotted the logic error inherent in this passage? It's a pretty simple one: a false dilemma. The article assumes that there are only two kinds of articles:

  • The Good Article, based upon spotlessly clean and replicable research results and full theoretical consideration; and
  • The Bad Article, based upon intentional misconduct that can be traced to one or more specific contributors to the article.

The qualifier in the second descriptor is what matters. There's more than one kind of "Bad Article": the Bad Article that is simply bad science (such as the "hereditary SIDS" issue referred to earlier in that article, which resulted not from misconduct but from inadequate sampling methods); the Bad Article resulting from simple mistakes in measurement; and the Bad Article resulting from honest misinterpretation of data. These Bad Articles are potentially just as harmful as those resulting from intentional misconduct. Dr. Altman's piece, though, neglects these other possibilities, and judges as inadequate any retraction that does not point a finger of blame for fraud at a specific party. Frankly, from a scientific perspective the statement

This journal retracts Article X. Further examination has disclosed that the {data or analysis} cannot be replicated under appropriately controlled conditions. Please do not cite or rely upon Article X for any purpose.

is certainly specific enough for a scientist. And whining about how hard it is to disseminate these notices, in a day when virtually every scientific journal on the face of the planet has its own website and e-mail list (not to mention the ability to annotate online indices), seems so 19th-century that I'm surprised the complaint even appears.

What I find interesting is that this particular attack on peer review focuses almost exclusively on the hard-core life-science journals, when peer-reviewed journals in other fields are at least as likely to exhibit the same problems, often with equally severe consequences. Consider, for example, a hypothetical academic piece asserting that the Statute of Anne merely codified a natural-law vision of author's rights.1 Now watch what happens as that article and its conclusions filter into a revision of US copyright law. Does it result in potential loss of life? Probably not; at least, not this time. Work on crime causation, though, very well might if it improperly influences public policy. Yet nobody is objecting to the abominable methodology and skewed data sets concerning, say, the relationship between gun control and reported crime, or at least not objecting strenuously enough to create the kind of "controversy" Dr. Altman tries to depict in the medical sciences.


  1. Yeah, right. So it isn't hypothetical; it is instead merely prudence that keeps me from citing it.