A colleague recently told me: “NIRF rankings are coming up. We will have a gala event, like last year, with tremendous publicity. These rankings are becoming our Oscars.” His remark crystallised for me some of my concerns with the NIRF (National Institutional Ranking Framework) ranking of higher education institutions. I believe rankings should be replaced by ratings, and these should be done by an independent agency.

Rankings accentuate competition. They anoint champions of tennis tournaments, winners of beauty pageants, the fastest athletes in track races. Competitors not only want to win; some also want others to do poorly. Because they identify winners from a competitive choice set, rankings amplify zero-sum dynamics.

Academic contexts are not zero-sum settings. The best of academic institutions collaborate with their peers and build shared strengths. Ranking them competitively weakens the motivation to cooperate.

Rankings magnify the winner to the exclusion of everyone else. You might recall that Moonlight won the Oscar for best film last month. How many of the eight other films nominated for best film can you recall? Because rewards are so asymmetric, those being ranked go to extraordinary lengths to be highly ranked. This leads to two distortions in behaviour: a focus on measurements rather than goals, and susceptibility to herd mentality.

Since rankings are high-powered rewards, those being evaluated focus on what is being measured, possibly to the exclusion of what is important. When schoolteachers were rewarded based on students’ test scores, the students’ test-taking abilities improved, but their creative abilities, social skills, and love for learning declined.

Rankings reduce performance on many dimensions to a single ordinal number. When all are measured on the same metrics with the same weights, ranking-sensitive institutions eschew differentiation and avoid innovation.

Some higher education institutions are tailoring their strategy narrowly; others might be gaming their reporting, with the objective of improving their NIRF rankings. When an institution reports the area dedicated to sports in square feet rather than square yards, is it an honest error or a desire to raise its NIRF score? I heard the director of a reputed institution justify a focus on certain activities to the institution’s alumni, not because of their institutional value, but because they would improve rankings.

Since rankings raise the stakes, evaluators must ensure that criteria and measurement are robust. Though the National Board of Accreditation (NBA) is valiantly administering the NIRF, the criteria remain fluid. For example, should the research output of management institutions be recognised only if it is management research, or should it also include work done in core disciplines such as economics, statistics and psychology? One of the criteria used to evaluate management institutes in 2016 measured their commitment to Massive Open Online Courses (MOOCs). MOOCs are valuable to education in India. However, given their high fixed costs and increasing returns to scale, MOOCs should be offered from only a few platforms. Making them a ranking criterion pushes all institutions to invest in MOOCs, risking socially inefficient investments in sub-scale offerings.

Evaluation systems must be built on a broad consensus about appropriate criteria and their weightages, as well as on the audit capability to ensure that submissions are accurate. Whether this can be done by a body subsidiary to the HRD ministry is an open question.

Although the NBA operates as an independent body, the membership of its various authorities suggests significant government influence, especially of the MHRD, on its operations. The shadow of NIRF rankings can induce overly compliant behaviour from academic institutions. Even if the MHRD is entirely aboveboard in its actions, its ability to influence rankings might be seen as compromising the independence of academic administrators.

So, what’s the way forward? Evaluating higher education institutions and publicising how they stand on various criteria is useful. A reliable, independent evaluation is socially beneficial. The approach to evaluating academic institutions should be to rate them, not rank them. Academic institutions should be measured against yardsticks, not against one another. Their evaluation should be akin to how hospitals, hotels, or professional bodies are rated, not how competitive sports events, beauty pageants, or horse races are conducted.

Ratings may create less buzz, but they are better aligned with what the government is trying to achieve: encouraging academic institutions to attain high quality standards, and overseeing institutions in a way that differentiates among them based on quality.

To my knowledge, nowhere else in the world are rankings or ratings of academic institutions conducted by government bodies. To prevent politicking, and to avoid even the appearance of a conflict of interest, ratings should be done by a neutral third party that is at a clear arm’s length from the Government as well as from the institutions being rated, and that is seen as capable in evaluation and audit.

This will only increase public trust in the ratings.

The writer is Director, IIM-Ahmedabad
