A recent article in the Proceedings of the National Academy of Sciences (PNAS) examines the causes of retraction for more than 2000 articles published in biomedical and life science journals.1 Of these, nearly 70% were retracted because of author misconduct, the most common problem being suspected fraud (43.4%), followed by duplication and plagiarism. Compared with data from 1975, the incidence of misconduct-related retractions has increased 10-fold.
Overall, misconduct-related retractions involve only a tiny fraction of the more than 25 million articles indexed in PubMed. The issue is not the absolute number of retractions, which remains small, but the fact that it is increasing considerably and rapidly. Exactly how many articles are retracted because of misconduct is difficult to establish, as published retraction notices are often vague and unclear about the cause of the problem (estimates place the figure at 0.2% of the 1.4 million articles published annually). Whereas from 2002 to 2006 fraud-related retractions outnumbered error-related ones by 20%, from 2007 to 2011 error-related retractions were fewer than 40% of fraud-related ones.1
It is hard not to point fingers; most fraud-related retractions come from our own backyard, the United States (probably reflecting the fact that about 26% of all scientific publications originate here). China and India account for the most retractions due to duplication and plagiarism (probably reflecting difficulties with the use of English). China's share of scientific publications rose from 4.4% in 2003 to 10.2% in 2008, while the shares of the United States and United Kingdom declined, positioning China to become the largest producer of scientific articles in the near future.
Another interesting observation made by Fang et al1 concerns the quality of the journals in which most retractions occur. There was a direct correlation between the Impact Factor (IF) and the number of retractions. Prestigious journals such as Science (IF: 31.2), PNAS (IF: 9.68), and Nature (IF: 36.28) have the most retracted articles, while 16 journals with IFs below 3 (a group that includes the American Journal of Neuroradiology [AJNR]) had none (in my time as Editor-in-Chief, only 1 AJNR-related article had to be retracted, and the retraction was actually issued by another journal because the original version of the duplicated article had been published by us). Because retractions are generally initiated by journal editors, some of whom may not wish to admit the mistake of publishing a fraudulent article, many articles that should be retracted are not; they remain in the literature and continue to gain citations. Thus, the current number of retractions is probably an underestimate.
Stephen Breuning was the assistant director of the largest institution for the mentally impaired in Pennsylvania. In 1983, it was discovered that he had falsified data presented in a symposium abstract, which led the National Institute of Mental Health to review his publications and conclude that 24 of 25 were fraudulent.2 Surprisingly, only 3 were retracted at the time, and 24 years later, a study showed that they continued to be cited, even by prestigious journals such as the British Journal of Psychiatry (IF: 6.61).3 In another study, 235 retracted articles accumulated 2034 citations, and depending on how the data were analyzed, the retractions were acknowledged in only 6.4%–7.7% of the journals.4 Moreover, “infamous” articles may be cited more often than “famous” ones. On the Scholarly Kitchen Web site* (http://scholarlykitchen.sspnet.org), Kent Anderson said about retractions: “In high impact journals, there is no reason to believe that these citations don't contribute their fair share to the impact factor. After all, an infamous paper may be more readily cited because it's top of mind for a busy author.”5 (This is a common joke among journal editors: if you want your IF to go up, publish a fraudulent article!)
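To see why this joke works, recall how the IF is calculated. What follows is the standard Journal Citation Reports definition restated in my own notation, with hypothetical numbers; it is not part of any of the studies discussed here:

\[
\mathrm{IF}_{2012} \;=\; \frac{\text{citations received in 2012 by items published in 2010 and 2011}}{\text{citable items published in 2010 and 2011}}
\]

A journal that published 500 citable items in 2010–2011 and received 1500 citations to them during 2012 would have a 2012 IF of 1500/500 = 3.0. Nothing in the numerator distinguishes citations of sound articles from citations of retracted ones, which is why a heavily cited fraudulent article props up the IF until its citations finally dry up.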
If we have the IF, the h-index, and other metrics, why not have a retraction factor? Drs Ferric Fang and Arturo Casadevall, editors of Infection and Immunity (IF: 4.16) and mBio (IF: 5.3), respectively, set out to create one. Fang and Casadevall6 simply took the number of retractions per journal from 2001 to 2010 and divided it by the total number of that journal's articles indexed in PubMed during the same period. Because the number of retractions tends to be small, they multiplied the result by 1000 to obtain workable numbers (a worked example follows this paragraph). This recent article again showed that retractions occur more often in higher IF journals. In a different article, the retraction policies of major biomedical journals were studied.7 For this investigation, the author selected the 122 journals with the highest IFs and found that 62% did not have a formal policy regarding retractions (AJNR does, and it can be found at http://www.ajnr.org/site/misc/ifora.xhtml#dupl). In August 2012, The Scientist published an opinion piece calling for a “transparency index” similar in spirit to the IF.8 The authors suggested that this index should reflect the following: the article review protocol of the journal (AJNR has one), whether underlying data are made available (AJNR does not require this unless something is called into question), whether the journal uses plagiarism-detection software (yes, AJNR does), whether a mechanism for dealing with fraud issues exists (see the Web address above for the AJNR policy), and whether corrections and retractions are as clear as possible (I believe ours are).
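In other words, the retraction index is simple arithmetic. The following is my restatement of the calculation, with hypothetical figures:

\[
\text{retraction index} \;=\; 1000 \times \frac{\text{retractions, 2001–2010}}{\text{articles published, 2001–2010}}
\]

A journal that published 25,000 articles in that decade and retracted 5 of them would score 1000 × 5/25,000 = 0.2, which allows journals of very different sizes to be compared on the same scale.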
Older research told us that retractions took a long time to occur, an observation that no longer holds true, as shown by a recent investigation.9 In that study, the entire universe of biomedical literature published between 1972 and 2006 was examined. While other investigations have used loose controls, in this one the authors chose as controls only articles published immediately before and after a retracted article and in the same journal in which the fraud had occurred. Let me spend a few lines here because the results were very interesting. The authors found that most fraudulent articles were written by top researchers at US universities and that retracted articles were likely to be highly cited in their first year. However, they also found some good and honest things that happen after retractions: the system is fast, with nearly 50% of retractions occurring within 2 years of publication (and this delay seems to be getting shorter with time); retractions are unbiased; and the effects of retractions are severe and long-lived (citations of retracted articles were down 72% at 10 years). Most retracted articles are American, and the number of retractions reported by journals published outside the United States is small. Does this imply that research done elsewhere is more honest? I believe that this is not the case and that foreign journals perhaps have less well-established policies and procedures on retractions and/or pay less attention to this problem.
If you want to be entertained (not to say amazed or even disgusted) by the retraction epidemic, I suggest visiting the Retraction Watch Web site (http://retractionwatch.wordpress.com), a moderated blog that reports instances of fraud and allows visitors to comment. A recent post describes a new twist: retraction of an article “in press,” meaning that its final version was not yet available and that it had not been assigned space (issue, pages) in the journal.10 The implication is that fraud is occurring, and being detected, even before formal publication. As in many other cases, the reason for this retraction was opaque, listed simply as “article withdrawn at the request of the authors and editor.”
However, none of this should come as a surprise. An article by John Ioannidis, a meta-researcher who specializes in this sort of analysis, states that 80% of nonrandomized studies (the most common type published) are eventually proved wrong, as are 25% of randomized trials and 10% of large multi-institutional randomized ones.11 He has been able to identify the characteristics that make an article more likely to contain false information: small study populations, small effect sizes, financial interests, and work in a “hotter” field, among others. All of these features contribute to eventual retractions. Dr Ioannidis said, “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded.”12
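Ioannidis has formalized this argument elsewhere in terms of the positive predictive value (PPV) of a claimed finding; the sketch below is my summary of that well-known reasoning, not something spelled out in the work cited here. If R is the prior odds that a probed relationship is real, α the type I error rate, and β the type II error rate, then

\[
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}
\]

Small, underpowered studies raise β, and “hot” fields in which many unlikely hypotheses are probed lower R; either way the PPV drops, so a large share of positive claims in such settings are false even before anyone sets out to cheat.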
As shown by 2 cases from just last year, the retraction (and fraud) epidemic continues to grow. The first case involved Dr Diederik Stapel, dean of the School of Social and Behavioral Sciences at Tilburg University in the Netherlands.13 His studies dealt with the effects of trash-ridden environments and of eating meat on behavior, and some were published in Science. Dr Stapel's deception was driven (according to him) by his “quest for beauty—instead of truth.” People like him, obsessed with order and symmetry, have difficulty accepting the often messy results of research. A university committee concluded that 55 of his articles were fraudulent, and he is now being investigated for misuse of public funds given to him in the form of grants.
In the second case, Dr Yoshitaka Fujii, a researcher and anesthesiologist from the University of Tsukuba in Japan, went on for years publishing fraudulent articles.14 The first allegations of fraud came in 2000, and by 2012, a panel of investigators had concluded that he had been publishing falsified data since 1993. In April 2012, 23 journals publicly requested that the Japanese Society of Anesthesiology investigate Dr Fujii. By June, a commission had found that 172 of his articles contained fabricated data and that, of these, 126 were “totally fabricated.”
This last example is the largest case of scientific fraud to date, and I am sure that, unfortunately, it will not be the last. In 1 survey, 2% of academics admitted to falsifying or fabricating data, and 28% claimed to know colleagues who had done so.15
*The Scholarly Kitchen is a moderated blog established by the Society for Scholarly Publishing to “advance communication through education and networking.” It is a must for anyone interested in scientific publication.
REFERENCES
- 1.
- 2.
- 3.
- 4.
- 5.
- 6.
- 7.
- 8.
- 9.
- 10.
- 11.
- 12.
- 13.
- 14.
- 15.
- © 2014 by American Journal of Neuroradiology