Artificial intelligence is superior to humans at numerous tasks, but it is still vulnerable to human biases
When I started working at Chemistry World almost a decade ago, I had barely heard of artificial intelligence. Now, every third or fourth research story that crosses my desk has some sort of AI aspect to it. Recent stories are a good case in point. It’s left me wondering how long it will be before machine learning features in most, if not all, research studies.
The question then becomes: how long before papers list an AI as an author? AI can write text, optimise reaction conditions and suggest synthetic routes. Guidelines for those submitting papers to any of the Royal Society of Chemistry’s journals state ‘Everyone who made a significant contribution to the conception, design or implementation of the work should be listed as co-authors.’
The patent world is already having this debate. Stephen Thaler claims his AI system, named Dabus, was the sole inventor of two inventions – a fractal container and a neural flame. Thaler has filed patent applications listing Dabus as the inventor around the world. His applications in the UK, the EU, the US, New Zealand, Taiwan, India, Korea, Israel and Australia have all been rejected. But in 2021, South Africa’s patent office made history by becoming the first to grant a patent listing an AI system, not a person, as the inventor.

Interestingly, most jurisdictions don’t seem to dispute that Dabus was the inventor. But because their laws state that an inventor must be a ‘natural person’, Dabus clearly doesn’t fit the bill. My dictionary defines ‘everyone’ as ‘every person’, so academic journals would probably roll out a similar argument.
AI not being human – and so not sharing our inherent biases – is frequently cited as one of its benefits. But if AIs are trained on data created and compiled by humans, data with a strong systemic bias toward positive results, then they won’t be immune to our preconceived ideas about how the chemistry we understand tends to work. A culture that values publishing negative or inconclusive results will empower AI to use failure to predict success.
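If that sounds abstract, here is a minimal sketch of how publication bias leaks into a model. Everything in it is hypothetical – the descriptors, the success model and the publication rates are made-up illustrations, not real chemistry data. A toy classifier trained only on the reactions that happened to be ‘published’ ends up with an inflated view of how often reactions succeed:

```python
# A minimal, self-contained sketch with made-up data: a model that only
# ever sees published "successes" learns an inflated success rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical reaction descriptors; true success depends on two of them.
X = rng.normal(size=(20_000, 4))
p_success = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 1.5)))
y = rng.random(len(p_success)) < p_success  # only ~18% of reactions truly work

# Publication bias: successes are reported far more often than failures.
published = rng.random(len(y)) < np.where(y, 0.95, 0.10)

# Train only on what made it into the literature.
model = LogisticRegression(max_iter=1000).fit(X[published], y[published])

print(f"True success rate:              {y.mean():.2f}")
print(f"Model's predicted success rate: {model.predict(X).mean():.2f}")
```

Because the failed reactions rarely make it into the training set, the model’s baseline expectation of success is skewed upward; feeding the unpublished failures back in is precisely what would recalibrate it.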
AI is clearly superior to humans at numerous tasks. It can uncover findings that would not otherwise emerge from established knowledge. It is also incredibly useful for extracting insights from large data sets, as well as automating repetitive tasks. But let’s not forget that research using AI is only as good as the quality of the data it’s trained on.