Misidentification of scanning electron microscopes (SEMs) in peer-reviewed papers could be a sign that the papers were produced by paper mills, researchers have claimed in a preprint.

A greyscale image showing light waves on a dark background, including a bar showing the SEM details and brand information

Source: © Reese AK Richardson, OSF Preprints 2024, CC-By Attribution 4.0 International

A study has revealed that in thousands of materials science and engineering papers the metadata on scanning electron microscopy images doesn’t match the instrument listed in the article

According to the researchers, who are based at Northwestern University in the US, published research in materials science and engineering has until now escaped the questions about reliability and reproducibility that have recently hit other scientific fields.

However, users of post-publication peer review sites, such as PubPeer, have recently identified a significant number of articles where the make and model of the SEM listed in the text of the paper does not match the instrument’s metadata in images in the published article.

To get a sense of the scale of the problem in published studies in materials science and engineering, the researchers developed a semi-automated process to scan figures for image metadata and check it against the SEM instrument identified in the text. They analysed over a million articles published since 2010 in 50 journals with impact factors ranging from 2.1 to 24.2, published by four publishers (Elsevier, Frontiers Media, Springer Nature and PLoS). In all, they found just over 11,000 images whose metadata identified the SEM make and model.
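To give a sense of how such a check might work in practice, here is a minimal sketch in Python. It is not the authors’ actual pipeline; the vendor list, the TIFF-tag handling and the file names (figure2.tif, methods.txt) are illustrative assumptions. It reads the metadata that SEM instruments commonly embed in TIFF files and flags a paper where the manufacturer named there disagrees with the one named in the text:

```python
# A minimal sketch of a metadata-versus-text consistency check.
# Not the authors' pipeline: the vendor list, tag handling and
# file names are illustrative assumptions.
from PIL import Image

# Manufacturer names to look for in image metadata and article text
VENDORS = ["zeiss", "fei", "jeol", "hitachi", "tescan", "thermo"]

def vendor_in_image(path):
    """Return a vendor name found in a TIFF image's metadata, else None."""
    img = Image.open(path)
    # SEM instruments typically write vendor-specific TIFF tags; here we
    # simply scan every string-valued tag for a known manufacturer name.
    for value in getattr(img, "tag_v2", {}).values():
        if isinstance(value, bytes):
            value = value.decode("latin-1", "ignore")
        if isinstance(value, str):
            for vendor in VENDORS:
                if vendor in value.lower():
                    return vendor
    return None

def vendor_in_text(article_text):
    """Return a vendor name mentioned in the article's text, else None."""
    lowered = article_text.lower()
    for vendor in VENDORS:
        if vendor in lowered:
            return vendor
    return None

# Flag a mismatch when both sources name a vendor and they disagree,
# and an unreported instrument when only the image names one.
image_vendor = vendor_in_image("figure2.tif")              # hypothetical figure file
text_vendor = vendor_in_text(open("methods.txt").read())   # hypothetical methods text
if image_vendor and text_vendor and image_vendor != text_vendor:
    print(f"Mismatch: image metadata says {image_vendor}, paper says {text_vendor}")
elif image_vendor and not text_vendor:
    print("Instrument not reported in the text")
```

A real pipeline would, as the researchers’ semi-automated process did, keep a human in the loop to verify each flagged mismatch before reporting it.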

Poor practice or paper mill?

Overall, they found that for just over a fifth of these articles the image metadata did not match the SEM manufacturer or model reported in the text and, for another quarter, some of the instruments used in the study were not reported at all. ‘We found out that in a very large fraction it did not match,’ says Luís Nunes Amaral, a chemical and biological engineer at Northwestern, who led the team. ‘It’s even more worrisome that there was another 20% that didn’t even bother to report in the paper which instrument they were using. And this, in terms of reproducible science, in terms of accountability, is really, really poor practice.’

A segmented bar graph showing the number of SEM images with brandmarks increasing since 2010 from nearly zero to over two thousand in 2022.

Source: © Reese AK Richardson, OSF Preprints 2024, CC-By Attribution 4.0 International

Failure to identify the scanning electron microscope used in a paper was another frequent issue the authors identified, although this is more likely a sign of sloppy research than fraud

‘I’m not saying that all those 20% that do not report are fraudulent science, but they are definitely sloppy science – I have no respect for people that don’t do a good job and care for what they are doing.’

Among the articles with misidentified instruments, the researchers recorded other issues, such as recurring authorship. The five most prolific authors of articles with misidentified SEMs each authored between 13 and 42 problematic articles. Other problems identified included shared image watermarks and shared textual artefacts. The team said that these unexplained commonalities could be a sign of paper mill activity – businesses that write and sell authorship on fraudulent scientific manuscripts.

‘What we know with paper mills is that people just pick up images from here and there,’ says Amaral. ‘They pick up random text and they change little things to make it past plagiarism detectors and stuff like that. But these are not careful forgeries and so these papers are full of other issues … you find images that are duplicated. You find authors in collaborations that don’t make any sense, you get claims that the experiments were done in Iran, and there is no Iranian author. It’s completely absurd stuff that all seeps into these, which makes it even more strange that these things get in print.’

Amaral says the issue pervades science at all levels. ‘We are having new students that are being raised as scientists in this environment of everything goes … you feel compelled to engage in unethical behaviour, because you feel it’s the only way that you survive in science – and that is a terrifying thought,’ he adds.

‘We need to face this problem – we need to acknowledge [it]. We need there to be consequences. And we also need a revaluation of what we are doing as scientists.’

Weird errors and templates

Elisabeth Bik, a microbiologist and scientific integrity expert, says that although she has encountered the issue of misidentified SEM instruments before, she has never been able to carry out a systematic analysis on the scale the Northwestern researchers achieved. ‘We see these papers with SEM photos that have all kinds of other problems where there are some graphs that have repetitive elements, or photos that are overlapping, or photos that have been photoshopped … it has become an extra thing to look out for in my own tool set.’

Source: © Reese AK Richardson, OSF Preprints 2024, CC-By Attribution 4.0 International

Problem articles were far more likely to come from countries that already have a documented problem with paper mills

She agrees with the researchers that the error could indicate a paper mill at work. ‘It’s just a weird error and it is a sign of paper mills, because these paper mill papers are very hastily put together. They’re put together using a template usually, so they have some structure to them, and once you start to analyse several of them you can see they all follow the same structure, the same language. And since they are hastily put together in an effort, presumably, to churn out as many as possible and make the most money, they make small errors.’

Bik warns that paper mills are going to become harder to identify as methods of fabrication become more sophisticated. ‘We can only point out the tip of the iceberg – for every paper that we found all these errors … those are the dumb fraudsters because they leave traces for us to find. From now on, all the paper mills are going to add that as an extra check mark to make sure that they don’t make that mistake. So it’s going to be much harder to catch the next generation of paper mill papers.’

‘With generative AI we’re all going to be screwed in the sense that it’s going to be much harder to even see that these images are fake,’ she adds.

Bik says that journal publishers need to do a better job of screening papers for all these signs of paper milling. ‘Publishers have access to extra information we don’t have – is it one person submitting all these papers? Are there cover letters all written in the same language? We don’t have access to that data. They should be much more vigilant and strict in screening papers.’