Second look: Editorial on the problems with the National Institutional Ranking Framework

Some academics feel that the methodology should be more transparent. Often it is not clear whether an institution has not been ranked because it is not good enough or because it did not participate

The Editorial Board Published 30.05.24, 08:17 AM
Representational image. File Photo.

When it was conceived, the National Institutional Ranking Framework was seen by many as a positive step. It would organise information about higher educational institutions for the benefit of students and academics, and infuse colleges and universities with a healthy spirit of competition. The NIRF was approved by the former human resource development ministry in 2015 and the Union ministry of education has produced annual reports since 2016. But certain problems in the programme have persisted since then. Higher education is a sensitive field, and a country as vast and diverse as India makes ranking colleges and universities especially difficult. It must be asked, too, whether specialised institutions, universities and undergraduate colleges can be ranked according to the same parameters. Can an engineering college and an arts and science university be put in the same category? Inevitably, the parameters are generalised, and generalised parameters cannot do justice to the many ways in which teaching is conducted in different types of institutions. Of the five clustered parameters, the one that includes teaching does not directly assess teaching quality; students’ and alumni’s evaluations are not used. Types of teaching that fall through the cracks include practical and experiential methods and internships, which are relevant to training in, say, law or management. The assessment of research inclines towards the quantitative: publications — their number, not the standard of the journals publishing them — and citations are prioritised. Neither quality nor long-term research that yields no immediate results counts for much.

Some academics feel that the methodology should be more transparent. Often it is not clear whether an institution has not been ranked because it is not good enough or because it did not participate. Besides, it is apparently not possible to verify the authenticity of the data supplied by the institutions. That hurts the credibility of the exercise, particularly since institutions can curate the data they submit to suit the parameters. One of these parameters is perception, which includes the points of view of academics and potential employers but not of students; since it depends on who chooses to take part in the exercise, it is naturally open to bias. Greater attention to the spirit and aim of a broad-based education and to the efficiency of professional courses — the two are different — would bring about the necessary changes and make these rankings more meaningful.
