Following the release in August of the long-awaited ninth edition of India's ranking of higher education institutions (HEIs), concerns have been raised about the reliability of the government framework. This comes after research policy watchers highlighted inconsistencies in the rankings and institutions gaming the system to climb them. HEIs that game the system can create the illusion of offering a far better education than they truly deliver, misleading prospective students who depend on the rankings to choose a university.

Global rankings of HEIs, such as the QS World University Rankings and the Times Higher Education World University Rankings, have existed for many years, but offer only limited coverage of Indian institutions. The National Institutional Ranking Framework (NIRF), which rates more than 10,000 institutions, was therefore created by the Indian government in 2015, with the objective of developing a robust, data-driven ranking to evaluate Indian HEIs.

NIRF assigns rankings to institutions based on five criteria with different weightings for each: teaching, learning and resources (30%); research and professional practice (30%); graduation outcomes (20%); outreach and inclusivity (10%); and perception (10%).
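To see how the weightings combine, here is a minimal sketch of the aggregation. The five criteria and their weights are as published by NIRF; treating each criterion as a single score out of 100 combined by a simple weighted sum is an assumption for illustration – the real framework scores several sub-parameters within each criterion.

```python
# Minimal sketch of an NIRF-style overall score. The criteria and weights
# are from the framework; the flat weighted sum over 0-100 criterion scores
# is a simplifying assumption, not NIRF's actual sub-parameter methodology.

WEIGHTS = {
    "teaching_learning_resources": 0.30,
    "research_professional_practice": 0.30,
    "graduation_outcomes": 0.20,
    "outreach_inclusivity": 0.10,
    "perception": 0.10,
}

def overall_score(criterion_scores: dict[str, float]) -> float:
    """Combine per-criterion scores (each 0-100) into a weighted overall score."""
    return sum(WEIGHTS[name] * criterion_scores[name] for name in WEIGHTS)

# Hypothetical institution: strong teaching and outcomes, weak perception.
example = {
    "teaching_learning_resources": 82.0,
    "research_professional_practice": 74.0,
    "graduation_outcomes": 90.0,
    "outreach_inclusivity": 65.0,
    "perception": 40.0,
}
print(round(overall_score(example), 2))  # 75.3
```

On this simple model, a criterion weighted at 10% can move an overall score by at most ten points – small, but potentially enough to reorder institutions separated by fine margins.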

V Ramgopal Rao and Abhishek Singh, both at the Birla Institute of Technology and Science in India, recently highlighted inconsistencies in the NIRF. Their study analysed the rankings of the top 100 institutions over two consecutive years (2022–2023) and found that while the top 20 institutions were stable, rankings below them fluctuated widely.

Gaming the system

All five of the criteria used for scoring HEIs are open to manipulation. But the easiest to game, and the most difficult to detect, is the manipulation of publications and citations, which feed into the research and professional practice criterion, says Moumita Koley, a senior research analyst at the DST Centre for Policy Research at the Indian Institute of Science, Bangalore.

She stresses that ‘manipulation’ doesn’t mean falsifying research metrics, which would be difficult as NIRF uses data from Scopus and Web of Science. Instead, institutions indirectly manipulate these metrics by promoting the publication of papers, regardless of quality, further incentivising a ‘publish or perish’ culture. NIRF’s heavy reliance on bibliometrics as a proxy for research output and impact makes it vulnerable to abuse by authors and institutions looking to rise up the rankings.

[Image: Indian students. Source: © Shutterstock. Inconsistencies in the NIRF can cause prospective students to be misled about the quality of education they can expect at some institutions]

In 2023 and 2024, Saveetha Dental College in Chennai was the highest-scoring dental institute in the NIRF. This, however, wasn't the whole picture. An investigation carried out by Science in collaboration with Retraction Watch alleged that the institute was involved in academic publishing malpractice: papers written by undergraduates and faculty members were designed to be cited in other papers by Saveetha authors. This industrial-scale self-citation did not escape the notice of one researcher at the institute, who was left wondering why his papers had gained hundreds of citations since he joined.

India Research Watchdog (IRW), a group of volunteers dedicated to eliminating academic misconduct in India, has been monitoring citation malpractice. Over years of combing through social media platforms such as X and Telegram, its members discovered that paper mills – businesses that produce and sell fraudulent manuscripts designed to resemble genuine ones – offer citation-boosting services that allow clients to buy citations to their papers.

Citation cartels

Achal Agrawal, founder of IRW, noticed that review papers on well-trodden topics, which added little to the existing literature, had amassed over 100 citations. ‘So, initially, I thought it must be self-citations but it was not self-citing. It was citations from all over the world.’ Paper mills have clients from across the globe, which means they can create a web of papers that cite each other. This allows them to boost citation numbers artificially without risking self-citation, explains Agrawal.
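A toy example shows why such rings are hard to catch with simple checks. The sketch below, using entirely invented institution names and citation counts, computes a naive self-citation rate: a ring of mutually citing paper-mill clients spread across unrelated institutions looks clean on this metric even though every citation is bought.

```python
# Hypothetical illustration: a naive screen flags papers whose citations
# mostly come from the cited paper's own institution, but a citation ring
# spread across unrelated institutions passes. All data here are invented.

def self_citation_rate(paper_institution: str, citing_institutions: list[str]) -> float:
    """Fraction of citations that come from the cited paper's own institution."""
    if not citing_institutions:
        return 0.0
    return citing_institutions.count(paper_institution) / len(citing_institutions)

# Blatant self-citation: easy to flag.
flagged = self_citation_rate("Institute A", ["Institute A"] * 90 + ["Institute B"] * 10)

# Ring of paper-mill clients in different countries citing each other:
# the self-citation rate looks innocuous even though the citations are bought.
ring = self_citation_rate("Institute A", ["Uni X", "Uni Y", "Uni Z"] * 30 + ["Institute A"] * 10)

print(f"self-citation heavy: {flagged:.0%}")  # 90%
print(f"citation ring:       {ring:.0%}")     # 10%
```

Spotting rings instead requires examining the structure of who cites whom – dense clusters of reciprocal citation between otherwise unconnected groups – rather than any per-paper statistic.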

Citations aren’t the only metric being gamed. To inflate an institute’s publication count, some researchers have resorted to purchasing authorship from paper mills too.

Anonymous sources have told IRW that researchers are being threatened with forced resignation or dismissal if they don’t publish a certain number of papers per year. One researcher who was forced to resign had been an academic for over a decade.

Rao says that, in addition to research metrics, peer perception can play a pivotal role in deciding an institution’s rank. Perception is a subjective parameter that depends on the opinion of peers and can be influenced by historical reputation, publicity and other non-academic factors. NIRF collects perception data through surveys and assessments of educators and employers, which introduces the risk of bias and manipulation. Improving transparency around how this data is collected could reduce scepticism about the integrity of the parameter.

‘Students, who are the primary consumers of these rankings, should be in a position to choose the right institution by referring to these rankings…. Perception of an institution takes a lot of time to build and is influenced by factors such as high brand equity, large network of alumni, research and academic infrastructure, good placements and high-quality intake,’ Rao says. ‘Thus, we feel that a survey-based approach, which is focused on certain predefined questions, will not be sufficient enough to capture the legacy of very old legacy institutions.’

Koley suggests improving NIRF by periodically reviewing it to understand if – and where – gaming is happening. ‘Right now, so many people are interested in NIRF, so many are extensively working with the data. They should involve this kind of young dynamic people during the consultation and just take their input. Some academics do these things [academic ethics watchdogs] voluntarily because they feel that’s their academic responsibility. They should always engage with such people,’ she adds.