Why are so many Leaving Cert students being upgraded?

Ask Brian: Clerical errors can often be to blame – but there may also be a hidden factor

Students from Mount Carmel Secondary School, King’s Inns Street, Rotunda, Dublin, celebrate their Leaving Cert results. Photograph: Tom Honan

My child eventually started college – three weeks late – after being upgraded in the Leaving Cert and securing his first-choice course. What’s wrong with the correction process that leads to children having to appeal to secure grades they are entitled to?

The State Examinations Commission, which manages the Junior and Leaving Cert, oversees a task of Herculean proportions every year.

Individually and collectively, its examiners strive to give every student the best possible chance to demonstrate their ability in a set of exams that are marked anonymously, with every paper examined on an equal basis.

However, when results are issued each year, a growing number of students feel, often with the support of their subject teachers, that they should have received higher marks for some of their work.

This year, on foot of appeals by students, almost 3,000 Leaving Cert exam results – or 17 per cent – were upgraded. In many cases, this led to students receiving higher-preference college offers through the CAO.

The commission has previously fully acknowledged that examiner error can happen in a system as large as the Leaving Cert, with 400,000 individual grades.

Administrative errors

Some upgrades are down to simple administrative errors or other elementary mistakes.

But having discussed this issue with those who correct papers each year and manage the overall process, I have come to the conclusion that much of the problem may also lie in the underlying assumption that the grade distribution of each year should closely replicate that of the previous year.

Let me explain. Given that each teacher correcting a batch of exam scripts gets a set of different papers, one might assume that the overall pattern of results in each batch of 400-500 scripts would be relatively similar across the country.

Underlying such an assumption would be a belief that each batch contains equal numbers of high-performing and modestly-performing students.

However, we know that in some areas of the country, particularly in south Dublin, children from affluent backgrounds are more likely to have greater academic supports, such as access to grind schools. This can enhance performance in an exam system which tends to reward memory and rote learning.

Academically stronger

Could the fact that a cohort of students is upgraded every year be linked to some batches of scripts being academically stronger than others, by virtue of the schools from which each batch given to the correcting teacher is drawn?

If the supervising examiner, as well as the correcting teacher, is consciously or unconsciously aware of the expected pattern of grade distribution, it is easy to see how a proportion of grades in an academically strong batch could be suppressed to conform to the expected national profile of results, more commonly referred to as the “bell curve”.

When those scripts are sent for remarking in September, the appropriate higher grade is awarded because the “bell curve” is no longer in play. My explanation is, of course, purely speculative.