Publishing negative results is more important than ever as we try to train machines in chemistry
We have a negative data problem in chemistry, in drug discovery, and in science in general. By that I mean the results of experiments that didn't work, and our collective failure to report them. One immediate response to that idea has always been 'Who wants to hear about those?', but the answer is more people than you might think. And now it is becoming apparent that it's not just people who need to hear about these things.
There’s always been an argument for reporting such results just from intellectual honesty. After all, there has never been a research project in the history of science that went smoothly and perfectly from success to success, with everyone immediately understanding the lessons of each experiment and moving on to the next triumph. That only happens in old, ‘Edisonade’ type pulp-magazine stories, where wildly competent and creative inventors advance the plot (and defeat their sinister enemies!) through one amazing new discovery after another. Real life as a scientist is conspicuously lacking in this sort of experience. Honestly, I’ve never even had any sinister enemies to defeat, for one thing.
No, real research projects take wrong turns, have results that are difficult or impossible to reproduce, and can seem in retrospect to have taken far more time than anyone first imagined. It’s understandable that people don’t really want to highlight this sort of thing in a journal publication, but the unfortunate result is that we make our work sound uncomfortably close to those 1930s science fiction stories. If you try too hard to avoid looking embarrassed, you can end up looking ridiculous. It’s true that ‘as fate would have it, our final clinical candidate was only different from the starting compound by one methyl group’ does not make for an inspiring tale (although I have had that exact experience!). But every field can generate stories like this. It’s not as if you only got around to that result at the very end of the project – more likely, you generated that good result early on and then spent months finding out that you couldn’t improve on it, no matter what else you tried. This is no disgrace!
Even for projects that took a less teeth-grinding path to success, it still may feel a bit strange to lengthen a manuscript with lists of the inactive compounds or experiments that were abandoned. As a practical matter, you can run into trouble with journal page-length requirements if you try this, although the supplementary material would be a great place for them. I have always appreciated at least a mention of ‘Despite numerous attempts…’ or ‘Even after extensive experimentation…’, because that makes me trust the rest of the paper even more. But we need to go further.
Intellectual honesty aside (I’ve always wanted to start a paragraph that way!) there is a very sound scientific reason to keep negative results visible, and in detail. They really do have value, and that value has only become more obvious with the advent of machine learning (ML) techniques. The development of a good ML model absolutely requires negative results, and they need to be generated at the same level of rigour as the positive ones. Indeed, one of the big problems in applying such techniques to the existing scientific literature is the systematic omission of failed experiments. People worry (and rightly so) about the amount of the literature that can’t be reproduced, but the loss of work that never saw publication at all, deliberately pruned to make everything look nicer, is something that many of us are only now starting to appreciate.
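Since this point carries the weight of the argument, here is a minimal sketch of what that omission actually does to a model. Everything in it is synthetic and illustrative: the ‘descriptors’, the toy activity rule, and the use of scikit-learn’s logistic regression are my own assumptions, not anyone’s real pipeline.

```python
# Minimal, synthetic illustration: what happens to a simple activity
# classifier when the negative results are pruned from its training set.
# All data here is simulated; nothing comes from a real assay.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulate 2000 "compounds" with five descriptors each; a compound is
# "active" when its first descriptor exceeds a threshold (a toy rule,
# giving roughly 30% actives).
X = rng.normal(size=(2000, 5))
y = (X[:, 0] > 0.5).astype(int)

# Honest literature: both actives and inactives are reported.
model_full = LogisticRegression().fit(X, y)

# Curated literature: keep every active but only 5% of the inactives,
# mimicking the systematic omission of failed experiments.
keep = (y == 1) | (rng.random(len(y)) < 0.05)
model_pruned = LogisticRegression().fit(X[keep], y[keep])

# Score both on a fresh, unbiased test set.
X_test = rng.normal(size=(1000, 5))
y_test = (X_test[:, 0] > 0.5).astype(int)

for name, model in [("full data", model_full), ("negatives pruned", model_pruned)]:
    pred = model.predict(X_test)
    # False-positive rate among the truly inactive test compounds.
    fpr = ((pred == 1) & (y_test == 0)).sum() / (y_test == 0).sum()
    print(f"{name}: accuracy {model.score(X_test, y_test):.2f}, "
          f"false-positive rate {fpr:.2f}")
```

On a run like this, the pruned model calls far too many inactive compounds active, because it has learned from a literature in which almost everything works. The numbers themselves are made up; the mechanism is the point. A model can only be as honest as the record it is trained on.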
There have been attempts to start data repositories or even actual journals that emphasise negative results, but to the best of my knowledge these have always failed. Instead of quarantining them on their own scientific island, I think a better approach would be to get real and accept the failures as a natural part of science. Now that we’re publishing everything electronically, I would like an editorial control that flags every manuscript with negative-result content below a given threshold and sends it back to the researchers for a dose of reality. It’ll be good for us.