There’s a fine line between trusting scientific literature and relying too much on it
Many times I’ve seen a scientist, especially at the beginning of their career, choose to follow a dodgy but published procedure when an unpublished alternative would be better. It can be quite frustrating when you’re the one trying to convince them that a method based solely on your prior experience will be better than the one they see before them in a reputable journal.
A classic example is trying to perform a more sensitive cross-coupling with conditions almost certainly designed for a robust, run-of-the-mill Suzuki. You might have found the closest literature example to your substrate, but the likely truth is that the author – not being a catalysis expert, nor too bothered about their yield – simply chose the same conditions they used for their last cross-coupling. Chemical synthesis is central to many other fields, so just because something is published doesn’t mean it was optimised! (I may have made the same mistake of going for the closest precedent rather than the most relevant conditions early in my career, but I don’t remember it that way – it seemed sensible at the time.)
The best approach is to run both bonkers and staid conditions in parallel
It would be great to sow the seeds of thinking differently early on, although it’s a difficult line to walk. Students necessarily learn to follow authoritative textbooks and lecture notes, and it’s mostly counterproductive for them to come up with wacky alternatives instead of tried-and-true answers. To some extent, the same continues after graduation. The first ‘constructive’ feedback I received in an industry job was that I ran too much unusual chemistry, trying to divine ideal conditions for my substrates before the product had ever been made. While I’m still not sure whether my approach decreased the failure rate of my reactions, the learning opportunities around less common conditions – and the fun of applying them – make me glad that I later moved to high-throughput chemistry. There, the best approach is always to run both bonkers and staid conditions in parallel to discover what works best.
Our brains are constantly trying to map out which sources of information we can trust the most. In the world of academic research, supervisors famously contradict themselves continually, and different researchers might give contrasting advice. We notice this in peer review as well. An early paper I wrote left me baffled by an experience that will be familiar to many readers: the first reviewer was delighted by the work and asked that it be published immediately ‘as a service to the community’, while reviewer 2 requested major corrections, including months’ more experimentation. However, these divergent expert views can both be right. While we were happy with the original work, the extra series of experiments bolstered the paper’s conclusions.
We still need to test unusual and unexplored paths
It’s understandable, and desirable, that scientists are sceptical until we see the evidence. Nullius in verba, states the motto of the Royal Society – ‘take nobody’s word for it’. The non-science world we emerge from is filled with untrustworthy information sources, and authoritative-sounding statements are pushed on us seemingly from birth, whether from politics or the beauty industry. These can turn out to come from sources whose expertise lies mostly in accruing power or making off with customers’ cash.
In public health, not being able to consistently trust authority figures can change lives. This became very clear during the gravest depths of the Covid-19 pandemic, when various governments’ advice differed substantially from that of virus experts. Citizens trying to protect themselves were left confused. Certain areas of the media even seek to ridicule expertise by extracting only the most bizarre results or seemingly trivial findings from studies, when in fact the original work was more meaningful.
Nowadays, I know to trust a reputable journal like Chemical Science on science more than any UK newspaper, but to people who are not yet regular journal readers, the magnitude of that difference is not clear, nor is it initially obvious which kinds of papers are most likely to be reliable.
Perhaps, once we are established enough to have worked through all this, an opposite problem arises. It’s possible that a well-honed sense of expertise and trust in the ‘right’ sources could lead to a false sense of security that stops us trying new things. We need to remember that although it’s easier to find the answers once you’ve been in the field for a while, we still need to test unusual and unexplored paths. That is how discovery happens.