Editorial

Notfall Rettungsmed 2011 · 14:613–615
DOI 10.1007/s10049-011-1543-8
Published online: 30 October 2011
© Springer-Verlag 2011
E. Wager
Committee on Publication Ethics, Sideview, Princes Risborough, UK
How journals can prevent, detect and respond to misconduct

Science journals form an important part of the research record; their editors therefore have a duty to safeguard the integrity of the material they publish. This is especially true of medical publications, where unreliable research reports can harm patients. The problem was recently demonstrated by the case of Potti et al., who published a paper in Nature Medicine in 2006 [1] describing a technique for predicting patients’ responses to various chemotherapy agents. This technique was then used by other researchers to determine which type of chemotherapy cancer patients received in a clinical trial. While the trial was underway, however, a statistical analysis by Baggerly et al. indicated that Potti’s research was unreliable, and the trial was stopped [2]. The Potti et al. publication was later retracted (after the work was shown to be fraudulent), but it probably resulted in patients receiving suboptimal treatments, which may have reduced their chances of survival. More recently, the retraction of 88 papers by the German anaesthetist Joachim Boldt, who had failed to obtain ethics committee approval for his research, and the subsequent (ongoing) investigations into other types of misconduct relating to his work, have raised concerns for patient safety that have been reported not only in medical journals but also in newspapers [3, 4, 5]. These high-profile retractions, while demonstrating the limitations of conventional prepublication peer review in detecting fraudulent or unethical research, paradoxically also demonstrate the strength of the publishing system when editors take their responsibilities seriously, do the right
thing, and retract unreliable articles. Sadly, not all editors perform so well. A survey of retractions indexed in Medline showed that journal practices are inconsistent [6], and there is anecdotal evidence of editors failing to retract papers in the face of clear evidence that they are fraudulent [7]. Cases presented to COPE (the Committee on Publication Ethics) also suggest that editors are sometimes reluctant to retract articles. The reasons for such reluctance probably vary but may include concerns about litigation or about the effect a retraction might have on a journal’s reputation [8]. Although editors may feel that a retraction indicates a failure of their peer review process, organizations such as COPE view retractions as a mark that a journal is taking its responsibility seriously, and consider failure to correct or retract unreliable articles a far more serious problem [9]. In fact, the most prestigious journals appear to retract articles more often than lower-ranking ones [10], suggesting that editors’ fears for their journal’s reputation are unfounded.

But, as in medicine, prevention is better than cure, so editors (and publishers) also need to work to prevent and detect potential misconduct. Journals can play a useful role in educating researchers and potential authors about good practice. Journal policies and guidelines should also inform peer reviewers and editors about their responsibilities and what is expected of them. While most scientists would never consider committing the grosser types of misconduct, such as data fabrication, some lesser problems may arise through ignorance rather than malice. The conventions for allocating authorship of scientific papers are open to interpretation and require judgement in their application. They may even vary between disciplines: alphabetical listing is common in mathematics, for example, while physics papers list every person who contributed to a project, which may result in several hundred authors. It would therefore be helpful for journals to offer guidance about authorship, yet a survey of over 200 medical journals’ instructions for authors found that 41% did not [11].

Another area where researchers may need guidance is the preparation of figures, especially when images are a crucial part of the findings. Widely available software enables researchers to alter digital images with remarkable ease. Authors may be tempted to ‘clean’ or ‘beautify’ images with no intention of deceit, but may also be tempted to manipulate pictures to misrepresent the findings. Some journals in disciplines such as cell biology and radiography, in which images are especially important, have therefore developed guidance on the preparation of figures, distinguishing the types of alteration that are acceptable from those that are not [12, 13]. The development of image manipulation standards also provides a good example of how journals can go one step further than simply guiding authors, and can actively screen for unacceptable behaviour. When the Journal of Cell Biology started to screen for inappropriate image manipulation, it found this in 1% of submissions [12]. Several journals now routinely screen for image manipulation.
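Automated figure screening of this kind can be illustrated with a deliberately simple sketch. Real screening workflows are more sophisticated and partly manual; the hypothetical function below only flags exact duplicated blocks within a small grayscale image, the crudest signature of copy-and-paste manipulation.

```python
def find_duplicate_tiles(image, tile=2):
    """Return pairs of top-left coordinates whose tile x tile pixel
    blocks are identical, i.e. candidate copy-and-paste regions."""
    seen = {}        # pixel-block tuple -> first location where it appeared
    duplicates = []
    rows, cols = len(image), len(image[0])
    for r in range(rows - tile + 1):
        for c in range(cols - tile + 1):
            block = tuple(
                image[r + dr][c + dc] for dr in range(tile) for dc in range(tile)
            )
            if block in seen:
                duplicates.append((seen[block], (r, c)))
            else:
                seen[block] = (r, c)
    return duplicates

# A 4x4 grayscale image in which the top-left 2x2 block reappears lower down:
image = [
    [1, 2, 9, 9],
    [3, 4, 9, 9],
    [1, 2, 8, 8],
    [3, 4, 8, 8],
]
print(find_duplicate_tiles(image))  # -> [((0, 0), (2, 0))]
```

On a real photograph, uniform background regions would trigger many spurious matches, so practical checks use larger tiles and tolerate noise; this sketch only conveys the idea.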
Text-matching software is also increasingly used by journals to screen for plagiarism and redundant publication. Thanks to collaboration between a number of publishers, the anti-plagiarism software iThenticate™ has been coupled with a database of published articles to create the CrossCheck tool [14]. This allows editors to search a much wider and more relevant database than the material that is freely available on the Internet (which may be adequate for detecting student plagiarism). Many journals now screen manuscripts routinely (either on submission or just before acceptance) and believe this will not only detect plagiarised or recycled text before publication but may also deter authors from committing these offences in future [15]. In some disciplines it is also possible to check certain types of data automatically: a series of fraudulent chemistry papers was identified after other researchers checked the proposed structures using specialized software and found they were physically impossible [16].

Journals therefore have tools available for detecting various types of misconduct, but all carry costs, either directly or in terms of the time needed to use them or the additional burden they place on researchers. Editors and publishers therefore need to decide when and where to apply such tools. The benefits of prepublication screening must be weighed against its costs, both to the journal directly and in terms of delays and extra work for authors. Just as with diagnostic tests, the utility of a screening tool depends on the frequency and severity of the problems it is designed to detect, and on the tool’s sensitivity and specificity. If serious problems are common, it obviously makes sense to screen; most serious forms of misconduct are believed to occur only rarely, although when editors start to look for them systematically, they usually find them to be more common than they had first thought.
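The diagnostic-test analogy can be made concrete. Given a screening tool’s sensitivity and specificity and the prevalence of the problem it detects, Bayes’ rule yields the positive predictive value, i.e. the probability that a flagged manuscript really has the problem. The numbers below are illustrative assumptions, not measured properties of any real tool.

```python
def ppv(prevalence, sensitivity, specificity):
    """Probability that a flagged manuscript truly has the problem (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Suppose a problem is present in 1% of submissions (the rate the Journal of
# Cell Biology reported for image manipulation) and the screen has 90%
# sensitivity and 95% specificity:
print(round(ppv(0.01, 0.90, 0.95), 2))  # -> 0.15: most flags are false alarms
```

Even an accurate screen applied to a rare problem thus produces mostly false positives, which is one reason flagged manuscripts call for careful editorial follow-up rather than automatic rejection, and why routine screening is harder to justify for very rare problems.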
Evidence that such problems are more common than expected comes from journals that have instigated screening for image manipulation and from those that use text-matching software to detect plagiarism and redundant publication: both techniques have resulted in an increase in such cases being presented to COPE. For very rare problems, it may be hard to justify the cost of routine prepublication screening, and journals may be justified in using the available tools only on manuscripts that appear to be at high risk or in which problems have already been identified. Similarly, if problems are considered to do little harm (i.e. are of low severity), it may be hard to justify screening.

Conventional prepublication peer review is not very effective at detecting misconduct; however, it can sometimes play a role. Alert reviewers may spot plagiarism (especially of their own work), duplicate publication or submission (for example, if they are sent identical manuscripts by different journals), or fabrication (if results look “too good to be true”), but it is clear from the number of articles that have to be retracted that this cannot be relied on. While editors may be reluctant to overburden their reviewers, it may be helpful to remind them of their potential role in highlighting possible ethical problems, and COPE recommends this as best practice [17]. If misconduct is suspected, specialist reviewers, such as statisticians, may be called in [18]. However, without access to the original data, their powers may be limited. Some commentators have called for the publication of raw data to reduce both fraud and honest error. However, the technical issues of publishing clinical trial data have not yet been resolved, and there are also concerns about patient confidentiality if raw data are posted on public websites [19].

While cases of data fabrication and unethical research make the headlines, many other “questionable practices” and types of unacceptable behaviour appear less dramatic but occur more often. Authorship remains a difficult issue, and disputes are perhaps inevitable when academics face extreme pressure, and sometimes financial incentives, to publish.
Although authorship abuse may appear a “victimless crime” that does not affect the validity of the research and therefore cannot harm patients, it may reflect the ‘culture’ in which research is done and is often a sign of other problems [20]. Another problem that can result from the pressure to publish, and which may be viewed more seriously by editors than by authors, is multiple submission and
duplicate publication. As with authorship problems, such “indiscretions” may appear, at least to authors, relatively minor compared with research fraud, but they do have consequences for the research community, since they waste reviewers’ (i.e. researchers’) time and may increase the cost of peer review [21]. Another “questionable practice” is the failure of authors, reviewers or editors to disclose competing interests. Many journals now ask specifically about these but, in most cases, must rely on the honesty of the individuals concerned to disclose appropriately.

Everybody involved in the publication of research, be they editors, reviewers, authors or publishers, carries responsibility for ensuring the integrity of the process and of what is reported. Various guidelines are available, including those from the International Committee of Medical Journal Editors [22] and COPE [17, 23, 24]. International good practice guidelines for both authors and editors have recently been developed at the World Conference on Research Integrity and will be published shortly (see www.wcri2010.org). Authors and reviewers need to understand what is expected of them, and journals should have good systems in place for advising and educating authors, as well as for detecting and responding to possible misconduct and, in particular, for ensuring that a proper investigation takes place. As the COPE Code of Conduct notes, “this is an onerous but important duty” [17].
Corresponding address

E. Wager
Committee on Publication Ethics, Sideview
19 Station Road, Princes Risborough, HP27 9DE, UK
[email protected]

Conflict of interest. The corresponding author states the following: I am Chair of COPE (the Committee on Publication Ethics) and helped develop some of the guidelines mentioned in this paper. This is an unpaid position. I work as a freelance trainer and receive fees for running courses on publication ethics.
References

1. Potti A, Dressman HK, Bild A, Riedel RF, Chan G, Sayer R et al (2006) Genomic signatures to guide the use of chemotherapeutics. Nat Med 12:1294–1300
2. Baggerly KA, Coombes KR (2009) Deriving chemosensitivity from cell lines: forensic bioinformatics and reproducible research in high-throughput biology. Ann Appl Stat 3:1309–1334
3. Dyer C (2011) The fraud squad. Br Med J 342:d4017
4. Schiermeier Q (2011) Research misconduct confirmed at German clinic – March 04, 2011. http://blogs.nature.com/news/2011/03/research_misconduct_confirmed.html. Accessed: 17 October 2011
5. Blake H (2011) Joachim Boldt profile: a glittering career built on charisma and charm. http://www.telegraph.co.uk/health/healthnews/8360678/Joachim-Boldt-profile-a-glittering-career-built-on-charisma-and-charm.html. Accessed: 17 October 2011
6. Wager E, Williams P (2011) Why and how do journals retract articles? An analysis of Medline retractions 1988–2008. J Med Ethics 37:567–570
7. Sox HC, Rennie D (2006) Research misconduct, retraction, and cleansing the medical literature. Lessons from the Poehlman case. Ann Intern Med 144:609–613
8. Williams P, Wager E (2011) Exploring why and how journal editors retract articles: findings from a qualitative study. Sci Eng Ethics, doi:10.1007/s11948-011-9292-0
9. Wager E, Barbour V, Yentis S, Kleinert S (2009) Retractions: guidance from the Committee on Publication Ethics (COPE). Croat Med J 50:532–535
10. Liu SV (2006) Top journal’s top retraction rates. Science Ethics 1:91–93
11. Wager E (2007) Do medical journals provide clear and consistent guidelines on authorship? Medscape Gen Med 9(3):16
12. Rossner M (2006) How to guard against image fraud. The Scientist 20:24–25
13. Rossner M, Yamada KM (2004) What’s in a picture? The temptation of image manipulation. J Cell Biol 166(1):11–15
14. CrossCheck. http://www.crossref.org/crosscheck/index.html. Accessed: 17 October 2011
15. Kleinert S (2011) Checking for plagiarism, duplicate publication, and text recycling. Lancet 377:281–282
16. Harrison WTA, Simpson J, Weil M (2010) Editorial. Acta Crystallogr E66:e1–2
17. COPE: Code of conduct and best practice guidelines for journal editors. http://www.publicationethics.org/resources/guidelines. Accessed: 17 October 2011
18. Al-Marzouki S, Evans S, Marshall T, Roberts I (2005) Are these data real? Statistical methods for the detection of data fabrication in clinical trials. Br Med J 331:267–270
19. Boulton G, Rawlins M, Vallance P, Walport M (2011) Science as a public enterprise: the case for open data. Lancet 377:1633–1634
20. Wager E (2009) Recognition, reward and responsibility: why the authorship of scientific papers matters. Maturitas 62:109–112
21. Wager E (2010) Why you should not submit your work to more than one journal at a time. AJTCAM 7:160–161
22. International Committee of Medical Journal Editors (ICMJE) Uniform requirements for manuscripts submitted to biomedical journals. www.icmje.org/. Accessed: 18 October 2011
23. COPE: Code of Conduct for Journal Publishers. http://www.publicationethics.org/files/Code%20of%20conduct%20for%20publishers%20FINAL_1.pdf. Accessed: 18 October 2011
24. COPE flowcharts. http://publicationethics.org/resources/flowcharts. Accessed: 18 October 2011