Everyone makes mistakes, says François-Xavier Coudert. But in science, everyone has to correct them too
Progress and understanding in science rely on faithful reports of our research – the scientific record. It is every scientist’s duty to add knowledge to this record, but also to safeguard its integrity by checking that others’ work is reproducible.
Yet, in a system where academics are driven to produce high-impact results, our priorities can become skewed. News stories and commentaries regularly highlight how intense pressure to pursue excellence can lead to misconduct (see, for example, the recent analysis by the French National Centre for Scientific Research’s ethics committee1).
Less directly, those same pressures can lead to errors (honest or otherwise) going undiscovered or uncorrected, rather than being expunged by science’s self-correcting mechanisms. Overall, as a scientific community, how good are we at maintaining the integrity of the scientific record?
Broken record
From the researcher’s point of view, correcting the scientific record seems to be littered with obstacles. The most obvious of these is sharing your correction: getting it published. This is firstly because the current publishing system places a heavier burden of proof on those contradicting a published result than on the authors of the original research. Such reports are often required to provide not only a full analysis of the results, but also of the reasons why these differ from earlier works now considered authoritative. This makes it much harder to revisit or critique published papers, or share contradictory results, than it is to report a surprising new finding.
Knowledge comes in many forms – it is not limited to new discoveries
Even when a correction is not polemical, when it is simply factual, it can be difficult to publish because publishing is heavily weighted in favour of novelty. I recounted my own recent experience of these hurdles in a blog post.2 Having found invalid formulations, in a dozen unrelated papers, of a well-known physics law dating from the 1950s, we wrote a short paper on the topic as a pedagogical reference for others in the field. Though it was eventually published, the process was far from straightforward – reviewers, despite accepting the correction was valid, objected to its publication on the grounds that it was not ‘new physics’. Several colleagues shared similar experiences, few ending successfully. This requirement for novelty is editorial policy in many journals, but is in my opinion sometimes interpreted too narrowly.
Surprising new results are attractive (deep within, we all love it when science surprises us!), but sometimes an idea needs to be repeated before it is fully understood; and sometimes an old result put in a new context prompts an advance. A more useful principle could be ‘is this paper useful to the community?’ Knowledge comes in many forms; it is not limited solely to novel discoveries.
Journals also tend to be more conservative in publishing corrections, analyses of published papers, and follow-ups in general than original papers. While some journals such as Nature have a healthy policy of accepting work that refutes papers they have published, this is not general practice. The critique of striped nanoparticles is a good example of a paper that needed time to find a home, even though discussion through such papers is an inherent part of the scientific method.
Relegating these debates to journals with narrower audiences or lower impact does nothing to encourage researchers to critique and comment on earlier work.
A final barrier is that researchers’ career evaluations and funding are tied to their publication record. This means that reproducing and assessing earlier work may turn out to be a bad career choice simply because these activities are not funded or rewarded. So researchers are driven away from spending part of their time on boring but necessary studies, towards ‘sexy’ research that will yield papers, citations and grants. This trend, and the well-documented drop in publishing negative results over the past two decades,3 will harm scientific progress in the long term.
Beyond peer review
There are some positive signs. Researchers now have access to tools such as preprint servers and personal blogs that enable them to report negative results, comment on prior publications and refute earlier work more directly and immediately than traditional peer-reviewed journal publication allows. These are certainly worthy means of publishing and disseminating such reports, but their main shortcoming is that valuable information becomes scattered across unconnected places and, in the case of blogs particularly, they also lack permanence.
Post-publication peer review repositories such as PubPeer are one solution to this problem, providing centralised and searchable online databases of comments on published results. But only time will tell if they get traction in the broad academic community.
We need to be careful that the pressure to meet short-term goals, the emphasis on excellence and novelty, and the lure of rewards that benefit the individual do not prevent us fulfilling our duty to the community: safeguarding the integrity of the scientific record. This also requires fostering a particular state of mind, recognising that mistakes happen and that correcting them is healthier than hiding them. But this may also mean changing the way we teach and evaluate our students!
François-Xavier Coudert is a researcher at CNRS, on Twitter as @fxcoudert