Getting the measure of researchers and their work has long been contentious, but the issue is attracting fresh attention as evaluation systems adopted by several countries come under scrutiny.
The latest country whose research evaluation system has been put in the spotlight is Brazil. Rodolfo Jaffé, a biologist at the Vale Institute of Technology in Belém, Brazil, says the introduction of the journal ranking system Qualis in 2009 has lowered the standard of Brazilian science.
He explains that under this system, academics in Brazil receive the same amount of credit for publishing in any journal on the approved list. This means that publishing in lower-tier local journals carries the same weight as publishing in journals indexed in international databases such as Scopus.
This incentivises academics to take the quicker route and publish in journals that are easier to get into rather than making the extra effort to publish in internationally recognised journals that often have a more rigorous peer review process, Jaffé says. What’s more, thousands of journals that are used to judge researchers under the Qualis system are not indexed in Scopus, he says.
Lower standards
Qualis lists thousands of journals published within Brazil. Although no study has evaluated the quality of these journals, Jaffé suspects they have a lower quality threshold than those included in Scopus. For now, it is unclear whether the journals included in Qualis are predatory, or legitimate but of a lower standard.
One flaw of Qualis that Jaffé points to is that, for each discipline, a different committee of researchers selects the journals to include using its own set of criteria, making the rankings subjective and prone to bias.
Chemistry World reached out to Capes, the Brazilian government agency responsible for introducing Qualis, but did not receive a response. A 2020 report released by a commission appointed by Capes, however, recommends that Qualis should no longer be used to evaluate graduate programmes in the next evaluation, covering 2021 to 2024.
‘Assessing the quality of research is extraordinarily difficult,’ notes Tom Welton, a chemist at Imperial College London and president of the Royal Society of Chemistry (RSC), which publishes Chemistry World.
For those judging research, Welton recommends actually reading the work rather than relying on metrics. If metrics are used, he stresses that those doing the judging should understand what the metrics actually measure. Citation counts or altmetric ratings, for instance, may give a sense of the academic community’s response. ‘What [the metrics are] judging is not the quality of the paper, they’re judging the rate at which the community has responded to that paper.’
In reality, Welton says, some studies may be published for a decade or more before their worth comes to light. Those sorts of studies, which attract little attention for years before they start rapidly being cited by other researchers, are sometimes referred to as ‘sleeping beauty’ papers.
A 2019 analysis of sleeping beauties found that their number increased until 1998 but has remained constant since then. The authors of that study suggest this plateau is a result of improved global access to the scientific literature thanks to open access initiatives. However, other commentators have proposed that the pressure to publish could also be a factor.
Gaming the system
Alberto Baccini, an economist at the University of Siena in Italy, says that people assessing research should be aware that the process can influence academics’ behaviour. ‘For each research assessment, you can find some behaviour that changes in a way that is not desirable for society,’ he says. A 2019 study by Baccini and colleagues found that, in response to a 2010 policy that bases promotion decisions on the number of citations researchers accumulate, researchers in Italy have been citing their own work, or work by other researchers based at Italian institutions, more frequently.
Italy is not alone. In 2018, researchers in Indonesia raised concerns that some of the country’s scientists were gaming a new metric the government had introduced to measure researchers’ productivity and performance, including by artificially inflating self-citation counts. Surya Dalimunthe, an international consultant at the Islamic University of North Sumatra and the State Islamic University of North Sumatra in Indonesia, and a vocal critic of the Indonesian system, says the main problem with research evaluation is that such systems rely on external parties such as publishers and indexing databases. ‘The best metric is no metric at all,’ he says.
References
R Jaffé, bioRxiv, 2020, DOI: 10.1101/2020.07.05.188425