University rankings – what are they for?
Sir, – The lavish treatment that The Irish Times provides when each new world university ranking is published has consequences that are both good and bad (“UCD closes gap on Trinity in latest university rankings”, September 30th). The good consequences depend on the degree to which the ranking is based on significant material factors, such as staff-student ratio, expenditure per student, proportion of classes taught by adjunct faculty, faculty teaching load, and percentage of research-active full-time faculty. The ranking system can then pinpoint where the university is failing because of lack of investment or misdirected investment.
The Government may even, in consequence, improve funding or, better yet, the funding model. These material factors do make a difference between a good and a bad education. My experience is in an English department where the staff-student ratio has ranged between 25/1 and 35/1.
Most of the coursework is necessarily through massive lectures, with a smaller proportion left for small-group seminar opportunities. Half of those undergraduate seminars are then taught by graduate students or adjuncts. The teaching load for full-time staff was sharply increased after 2007.
In such conditions, teachers have a hard time helping undergraduates learn to write, think critically, speak cogently, or know their own minds. Meanwhile, in fear of a drop in rankings, faculty effort is forcibly redirected by management teams away from already impoverished undergraduate programmes toward the pursuit of grants, postgraduate students, and publications in certain journals.
The panic about ratings causes university management teams to spend money on PR in order to affect the opinions of those who rank universities. They may formally request all staff to lobby colleagues in foreign universities for a higher rating on questionnaires. They may insist on an increase in the quantity of postgraduates without respect to quality. They may demand that faculty publish only in journals the university ranking systems tabulate.
When the only aspects of quality considered are those that are digitally measured, a great divide can open up between appearance and reality. Time is spent on the manipulation of measures rather than on the search for truth or the betterment of life.
Ratings may go up (in the short term) while the actual quality of the university sinks.
These management-driven efforts to perform for the rating system can have a bad effect on scholarship as well as teaching.
For instance, “Irish studies” – that is, scholarship on Irish history, literature, drama, archaeology, language, etc – is a field in which, unsurprisingly, Irish universities excel, yet there are no Irish studies journals at all in the Arts and Humanities Index, the listing used by the QS rating system.
But to confine humanities scholars in Ireland to publishing only in the long-established journals on that index (as administrators have urged) would retard the search for truth about this culture and stymie scholarly endeavours that have led to “Irish studies” becoming the internationally significant field that it is today.
The audacity of one president of a Dublin university, and then another, in lobbying for a higher proportion of the national higher education budget to go to his university alone, in order to have one really highly ranked university in the QS and TES tables, is another extraordinary and unhealthy outcome of the focus on competition.
One final bad consequence of this country’s recent fascination with university rankings is that it can cause management teams to lose sight of the full purpose of education. In trying to sell the Government on the importance of investment in third-level education, university presidents have acted as if occupational training and marketable innovations (the “knowledge economy”) were the be-all and end-all of the idea of a university. Perhaps that really is their opinion.
For a different view, read Cardinal Newman’s “The Idea of a University”. – Yours, etc,