ON THE RISE
Retractions represent a rare event in scholarly publishing, particularly when compared to the 1.4 million-odd journal articles that appear each year. But they are rising at a rate that far outstrips the increase in new papers. As Nature reported in 2011, the number of retractions in 2010 was about 400, ten times the figure in 2001 (30). That compares to an increase of just 44% in the number of papers published per year over that same period.
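To see how those two trends compare on a per-paper basis, here is a minimal back-of-the-envelope sketch in Python. The 2001 figure of roughly 40 retractions is inferred from the "ten times" comparison above, and exact counts vary by database, so treat the output as illustrative rather than definitive.

    # Back-of-the-envelope comparison of retraction growth vs. paper growth.
    # Figures are approximations taken from the text above.
    retractions_2001 = 40    # inferred: one-tenth of the 2010 figure
    retractions_2010 = 400   # "about 400," per Nature's 2011 report
    paper_growth = 1.44      # annual paper output grew ~44% over the period

    retraction_growth = retractions_2010 / retractions_2001  # 10x
    rate_increase = retraction_growth / paper_growth          # ~6.9x

    print(f"Retractions grew {retraction_growth:.0f}x; papers grew {paper_growth:.2f}x.")
    print(f"Retraction rate per paper rose roughly {rate_increase:.1f}-fold.")

In other words, even after accounting for the growth of publishing itself, the retraction rate per paper rose roughly sevenfold over the decade.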
Why the increase? Almost certainly, the rise in retractions reflects greater attention to the veracity of published research and the growing use of software to detect plagiarism. At least one researcher notes that the increase may in fact be a good sign (13). That heightened scrutiny is both the cause and the effect of another trend: a better understanding of why journals are pulling more papers (28).
Even as recently as 2011, conventional wisdom held that the majority of retractions involved honest error (27). But new scholarship indicates that misconduct is far more likely to play a role than previously believed. A 2012 paper in the Proceedings of the National Academy of Sciences (PNAS) found that misconduct—plagiarism, data fabrication, image manipulation, and the like—was to blame for two-thirds of retractions (15).
Why the disparity? The authors of the PNAS article reported that opaque retraction notices obscured the reasons behind such events, which prevented previous analyses from divining the real causes of withdrawn papers. The availability of details that journals were not including—some of which were provided by reporting on Retraction Watch—has allowed scholars to work around these ambiguous statements from journals. In contrast to the results discussed above, a study published the same month as the PNAS paper concluded that most retractions involved honest error, precisely because it relied solely on publisher-provided retraction notices (17).
QUALITY VARIES
A 2014 study rated notices at 15 journals and found significant variations (4), and as Wager and Williams concluded, “Journals’ retraction practices are not uniform. Some retractions fail to state the reason, and therefore fail to distinguish error from misconduct” (31).
Resnik and Dinse (24) found that many notices omitted any mention of fraud, despite official findings of misconduct: “Of the articles that were retracted or corrected after an ORI finding of misconduct (with more than a one-word retraction statement), only 41.2% indicated that misconduct (or some other ethical problem) was the reason for the retraction or correction, and only 32.8% identified the specific ethical concern (such as fabrication, falsification, or plagiarism).”
And euphemisms—particularly for plagiarism—abound (21), from “an approach” to writing to a “significant originality issue.”
RATES VARY BY FIELD, COUNTRY
The rate of retraction by field varies a great deal. Retractions are quite rare in economics and business, for example (19), despite the fact that economists commit misconduct at the same rate as everyone else (23).
Lu et al. found that “biology & medicine and multidisciplinary sciences show the greatest retraction tendency (0.14 papers per 1000 publications)” (20).
Italy has the highest number of retractions for plagiarism, according to one analysis, and Finland has the highest number of those for duplicate publications. But these results were not normalized for the number of papers published overall in those countries (2).
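The missing normalization is straightforward to express. Below is a minimal sketch, using invented placeholder counts rather than real bibliometric data for any country, showing why raw retraction counts can mislead: a country with more retractions in absolute terms can still have a lower rate once total output is considered, using the same retractions-per-1,000-papers metric Lu et al. employ above.

    # Illustrative only: the counts below are invented placeholders,
    # not real figures for any actual country.
    countries = {
        # name: (retractions, total papers published)
        "Country A": (120, 900_000),  # many retractions, huge output
        "Country B": (30, 40_000),    # few retractions, small output
    }

    for name, (retracted, published) in countries.items():
        rate = retracted / published * 1000  # retractions per 1,000 papers
        print(f"{name}: {retracted} retractions, {rate:.2f} per 1,000 papers")

Here Country A has four times as many retractions, but Country B's rate per 1,000 papers is more than five times higher, which is why unnormalized country rankings should be read with caution.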
There is some evidence that retractions may be more common in drug trials (25), although a limited (and, in our minds, flawed) study suggests that papers with a disclosed medical writer are retracted at a lower rate (33).
One thing seems fairly clear, however: retractions are more common in high-impact journals (14). That may be due to a higher level of scrutiny, to more papers that push the envelope, or to other unknown factors.
MIXED EFFECTS ON CAREERS
Perhaps not surprisingly, retracted papers themselves see a 65% decline in citations in the short term (16). But the effect on other papers by authors who retract varies with seniority.
A group of researchers at the University of Maryland, the University of Rochester, and Northwestern University analyzed the impact of retractions on future citations and found that an effect does exist—for the mid- and low-level scientist. For leaders in the field, the drop is minimal (18). “Furthermore,” the group found, “the presence of coauthors with no prior publications predicts that established authors experience smaller citation losses.” Determining that the difference did not result from allocations of tasks or other procedural explanations, the authors concluded that the disparity reflects a form of the “Matthew Effect”: “Not only do the rich get richer, when riches are to be had, but the poor get poorer when catastrophe strikes.”
Retractions can claim innocent bystanders, too. Certain retracted articles—those involving misconduct, in particular—are linked to sagging citations and funding in related fields, with the former falling 5 to 10% (3). “This citation penalty is more severe when the associated retracted article involves fraud or misconduct, relative to cases where the retraction occurs because of honest mistakes. In addition, we find that the arrival rate of new articles and funding flows into these fields decrease after a retraction,” the authors reported.
While researchers caught in widespread misconduct likely will need to start looking for work outside the sciences, retractions per se are not a career killer. The scientific community does not ostracize authors who retract—at least, those who seem to do so willingly. A 2013 study in Scientific Reports (20), the Lu et al. analysis cited above, found that the authors of retracted articles do suffer a “retraction penalty”—a decline in future citations of their unretracted papers: “Citation penalties spread across publication histories, measured both by the temporal distance and the degrees of separation from the retracted paper. These broad citation penalties for an author’s body of work come in those cases, the large majority, where authors do not self-report the problem leading to the retraction.”
But authors who appear to be getting out in front of a problematic paper enjoy a different experience (20): “By contrast, self-reporting mistakes is associated with no citation penalty and possibly positive citation benefits among prior work. The lack of citation losses for self-reported retractions may reflect more innocuous or explainable errors, while any tendency toward positive citation reactions in these cases may reflect a reward for correcting one’s own mistakes.”
In other words, as we have pointed out on Retraction Watch, “doing the right thing” by being transparent seems to generate goodwill within the scientific community, even if the short-term cost is embarrassment.
Just as the effects of retractions on scientists are mixed, the effect of scientific miscues and misdeeds on the public also varies. Recent evidence suggests that research misconduct accounts for a relatively small percentage of total funding for science. An August 2014 article in eLife by Stern et al. (29) found that papers retracted as a result of misconduct “accounted for approximately $58 million in direct funding by the NIH between 1992 and 2012, less than 1% of the NIH budget over this period. Each of these articles accounted for a mean of $392,582 in direct costs (SD $423,256). Researchers experienced a median 91.8% decrease in publication output and large declines in funding after censure” by the Office of Research Integrity.
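The figures quoted above also imply a rough article count, a useful sanity check on the scale of the problem. A minimal sketch, simply dividing the approximate totals Stern et al. report (both of which are rounded, so the result is approximate):

    # Sanity check on the Stern et al. figures quoted above.
    total_direct_funding = 58_000_000  # "approximately $58 million" (1992-2012)
    mean_cost_per_article = 392_582    # mean direct cost per retracted article

    implied_article_count = total_direct_funding / mean_cost_per_article
    print(f"Implied number of retracted articles: ~{implied_article_count:.0f}")  # ~148

That works out to roughly 150 misconduct-related retracted articles over two decades, consistent with the point that fraud consumes a small share of the NIH budget.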
In spite of the lean state of federal funding for science and the fact that any dollar wasted on fraudulent research is too much, the Stern study does suggest that the public purse is fairly safe in that regard. On the other hand, one of Stern's co-authors on the eLife paper, R. Grant Steen, has traced misconduct to potential patient harm. In a 2011 article in the Journal of Medical Ethics (26), Steen reported that “Over 28,000 subjects were enrolled—and 9,189 patients were treated—in 180 retracted primary studies. Over 400,000 subjects were enrolled—and 70,501 patients were treated—in 851 secondary studies which cited a retracted paper.”
Steen found that 6,573 patients received treatment in studies that eventually were retracted for fraud. One 2001 article in the Saudi Medical Journal included 2,161 women being treated for postpartum bleeding (1). And while most of the papers Steen analyzed appeared in publications with low impact factors, likely minimizing their influence on future research, two appeared in The Lancet and JAMA, the latter a 2008 study of a purported breakthrough in the treatment of liver cancer that turned out to be bogus (8).