Survey highlights gap between citation metrics and the actual significance of research
The studies chemistry researchers perceive to be the most important in their field are not always the ones that get cited the most, according to a new survey.
An interdisciplinary US team examined how accurately a group of more than 350 chemists could estimate the impact of papers published in a 2003 issue of the Journal of the American Chemical Society (JACS). The team found that the importance of a research article is only partly reflected in the number of citations it accumulates.
In 2013, the study’s authors asked researchers from many subdisciplines of chemistry to pick up to three of what they judged to be the most ‘significant’ JACS papers out of the 52 published a decade earlier. The survey also asked respondents to identify up to three papers they thought were the most highly cited, up to three they would point out to other chemists and up to three they would share with anyone.
The respondents’ guesses of how many times an article had been cited correlated strongly with the actual number of citations. But individual researchers both overestimated and underestimated the figures, which, the survey’s authors say, makes individual predictions of citation counts difficult to rely on.
Papers that respondents picked as the most highly cited, or as ones they would share with other chemists, came from their own subdiscipline 70% of the time. Papers they perceived as significant came from their own subdiscipline 63% of the time.
’Chemists are distinguishing between citations and significance, even though metrics like h-index treat them as one,’ says Rachel Borchardt, a science librarian at American University in Washington, DC, who worked on the study.
Measuring impact
The corresponding authors’ h-index, a measure of a scholar’s productivity and the impact of their papers, outperformed all four of the survey questions in predicting how much a given paper would eventually be cited, Borchardt says. Although the h-index (and related metrics) have attracted much criticism over the years, they can still be useful for identifying the highest-performing researchers, she adds.
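For readers unfamiliar with the metric, an author’s h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of that calculation in Python (the function name and the example citation counts are illustrative, not drawn from the study):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers cited at least h times each."""
    ranked = sorted(citations, reverse=True)  # highest counts first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        # The h-index is the last 1-based rank whose citation count
        # still meets or exceeds that rank.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```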
Borchardt says it’s possible that in addition to author names, flashy findings and hot topics may have swayed how respondents rated papers.
One limitation of the study, the authors say, is that respondents were recruited through ‘online chemistry communication channels’ and were therefore self-selected, so they are not necessarily representative of the whole chemistry community. Another possible confounding factor is that the questions were not presented in random order, so the answer to one question may have influenced the answer to the next.
Stephen Davey, chief editor of Nature Reviews Chemistry in London, UK, told Chemistry World the results are perhaps not surprising, since other studies have previously shown that citations alone are a poor indicator of impact.
’I think it may be difficult for survey respondents to avoid the effect of hindsight in their evaluations of the articles,’ Davey says. ‘10 years down the road they may be aware of where some of the described results have led and be taking that into account subconsciously in their “predictions of impact”.’
‘Citation counts are really only measuring one thing,’ adds Borchardt. ‘There seems to be a disconnect between citation and actual humanly perceived significance.’
References
R Borchardt et al, PLoS One, 2018, DOI: 10.1371/journal.pone.0194903