Our cognitive biases can make it difficult to choose what’s best for science
Here’s a sentence I’ve heard a lot: ‘every liquid handler needs a lot of maintenance’. It’s not true! In fact, the people I have heard say this are those who have only used one kind of liquid handling robot – the type that does require a lot of tender, loving care. Other statements on the theme include ‘liquid handlers can’t do organic solvents’ (they absolutely can) and ‘I’m worried about getting another liquid handler as they need specialist programming skills’ (they do not, necessarily).
When we get used to a problem like the needy robot, we seem to justify its shortcomings as the necessary price of its features. Perhaps it’s a kind of psychological inertia. When buying equipment I might be steered towards a particular niche, such as a bot that can heat, stir and sample with needles and is hence more specialised than one that simply dispenses liquids. Then, because I had put in a very complex capital expenditure justification to buy this fancy bot, I will decide that the specific model I chose constitutes the best possible solution. That would probably be correct – I likely did choose the best option within the situation’s constraints.
However, humans have a curious tendency to extrapolate this result outside its domain of applicability. When it comes time to buy a second robot, or to advise another group on purchasing, the backs of our minds are already primed with detailed knowledge of what worked earlier. After making a choice, people are consistently more likely to lock in on similar solutions for a totally different application, even when they are no longer the best fit.
The effort we spend on a new way of working increases our mental investment in it. Before I had a liquid handler, I spent zero time programming and maintaining the bot. If, after my purchasing decision, that now takes up a lot of my time, I might retroactively justify it by telling myself that extensive maintenance is a necessary cost of any benefits I enjoy. It’s a relief to think the effort is worth it. If I’m not careful, the idea of hassle-free simplicity becomes something that threatens my mental model of lab automation.
This self-comforting justification matters more to us with expensive, one-of-a-kind equipment: after all, I’ve never heard such logical errors made over cheap consumables. The higher the stakes, the stronger our brains’ motivation to retroactively over-justify our decisions. And lastly, it can come back to our self-worth: ‘that improved method looks too far from my skill set, therefore I can’t do it’.
Of course, all of these are cognitive biases. Liquid handling robots have existed for decades. Some need next to no setup and maintenance, several can handle pretty much any organic solvent, and many require zero programming skill. And decision bias is by no means true only of robots. Which trade-offs you make when selecting equipment or systems depends on your exact use case, and it’s generally easier to find a tool to do one repetitive, simple task all day than to find the more flexible systems we sometimes need in research. Those trade-offs aren’t so worrying: they reflect technological, physical or cost limits. The cognitive biases themselves, though, are much more concerning from a scientific point of view – they can affect whether labs progress or stagnate.
I once worked in a place where a common phrase used to brush off any complaint was ‘it’s always like this in big companies’. The only problem was that I never heard it said by anyone who had actually worked at another big company. Untrue assertions proliferate quickly because science is full of highly intelligent people who absorb information from those around them and regurgitate it in relevant situations. When the information is a lesser-known heterocycle synthesis, that’s invaluable. However, it also means that scientists less experienced in the issue at hand become unwitting conduits for spreading bias.
It’s very human to make logical errors, and being scientists does not make us immune: at times it can even cloud our ability to recognise them. What I’ve noticed does help is having an outsider come in. A newbie free of the emotional baggage around our lab’s history can point out quick wins and obvious flaws that were hidden to us – if we choose to listen!
As tempting as the comfort of familiarity might be, we need to continually reject irrational arguments, for the sake of the progress of science. To improve our own labs, we must repeatedly push ourselves out of our comfort zones. And when we dare to tread beyond what we already know, we might even enjoy it!