Opinion

Ireland sleepwalking into a calculated Leaving Certificate grades disaster

As currently configured, the system for the Leaving Certificate has the same basic components as that used across the UK

After climbdowns from governments in Edinburgh and London on their calculated grades systems, Ireland’s exam authorities need to avoid the same breakdown in confidence. Photograph: Alan Lewis/Photopress

Since the publication of Scotland’s exam results in early August there has been unprecedented tumult in the UK’s education system, with the fairness of this year’s results thrown into question. After climbdowns from governments in Edinburgh and London on their calculated grades systems, can Ireland’s exam authorities avoid the same breakdown in confidence? Will they jump, and if not, can they avoid being pushed?

The cancellation of exams at the beginning of summer created an unprecedented policy challenge. How can you recognise and certify the achievements of young people without national exams, in a way that facilitates their progression, ensures fairness to individuals, and maintains standards across schools and across years?

This is a “short blanket” problem, as Champions League-winning manager Rafael Benitez once put it. When you cover your head, your feet are cold. When you cover your feet, your head is cold. It is virtually impossible for an algorithm-based system to simultaneously meet all these goals.

So it has proved in the UK. In Ireland, while the delayed delivery of results has added to the stress of students, it has also had the benefit of buying time. As currently configured, the system for the Leaving Certificate has the same basic components as that used across the UK: teachers allocate grades and rank pupils in each class, and the grades and rankings are then statistically moderated by an algorithm that adjusts them to reflect historical averages.
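To make the mechanics concrete, here is a minimal sketch of rank-based moderation. It is an illustration of the general idea only, not the actual algorithm used by the UK regulators or the Irish authorities, and the pupils, grades and historical shares are invented.

```python
# Illustrative sketch only: not the actual calculated-grades algorithm.
# It shows the general idea of rank-based moderation, in which a school's
# historical grade distribution is imposed on this year's teacher rank order.

def moderate_grades(ranked_pupils, historical_shares, grade_order):
    """Assign grades (best-ranked pupil first) so that the share of each
    grade matches the school's historical share in that subject."""
    n = len(ranked_pupils)
    results = {}
    cursor = 0
    for grade in grade_order:                         # highest grade first
        quota = round(historical_shares[grade] * n)   # places for this grade
        for pupil in ranked_pupils[cursor:cursor + quota]:
            results[pupil] = grade
        cursor += quota
    for pupil in ranked_pupils[cursor:]:              # rounding leftovers
        results[pupil] = grade_order[-1]
    return results

# Invented example: five pupils ranked by their teacher, in a school where
# historically 20% got H1, 40% got H2 and 40% got H3 in this subject.
print(moderate_grades(
    ["Aoife", "Brian", "Ciara", "Dara", "Eve"],
    {"H1": 0.2, "H2": 0.4, "H3": 0.4},
    ["H1", "H2", "H3"],
))
# {'Aoife': 'H1', 'Brian': 'H2', 'Ciara': 'H2', 'Dara': 'H3', 'Eve': 'H3'}
```

Note that the grade a teacher actually awarded appears nowhere in this calculation: only the rank order and the school’s past results matter.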

Comparability and fairness

The problems with this method range from the philosophical to the statistical:

1) The fundamental premise of the system is a category error. Assigning young people their place in a national distribution based on the historic performance of their school is not a measure of individual merit. While the desire of authorities on both sides of the Irish Sea to frame these grades as equivalent to any other year’s is understandable, it sets the results up to fail.

Once a human face replaces a number, the moral unsustainability of telling a young person that they have failed because someone in their school failed in a previous year becomes clear. For four months, regulators and statisticians focused on perfecting the omelette. It was only when results were released that they were reminded of the eggs.

2) In the UK, the algorithm prioritised comparability across years above all else. Consequently, the teacher-assessed grades were effectively ignored in the vast majority of cases, and grades were calculated on the basis of historical results and the submitted rank order. This decoupling of the results from the grades carefully allocated by teachers has been the root of many of the issues, and returning to those grades has provided the only sanctuary found by backtracking governments. While comparability is important, particularly for next year’s and last year’s cohorts, in an unprecedented situation it is not clear that it should trump other considerations, including fairness to less privileged groups.

3) The algorithm failed to account for outliers: in the main, high-performing young people in low-performing schools. Seeing talented young people in challenging circumstances stripped of their A grades and Oxbridge places has been the source of much public outrage. It is the antithesis of what people believe and want education to be: a system for generating opportunity, for succeeding in the face of barriers. Here, the barriers were built and buttressed by the authorities themselves. Ireland’s guidance document promises that outliers will be treated differently. It is vital that this happens in practice if confidence is to be maintained. Any result downgraded by two or more grades should be automatically reviewed before results are published.
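That safeguard is simple to make mechanical. A minimal sketch, assuming the higher-level H1 to H8 grade scale and the two-grade threshold suggested above; the function and its names are hypothetical.

```python
# Illustrative sketch only, using the higher-level Leaving Cert grade scale
# H1 (best) to H8 and the two-grade threshold suggested above; the function
# and its names are hypothetical.

GRADE_SCALE = ["H1", "H2", "H3", "H4", "H5", "H6", "H7", "H8"]  # best first

def needs_review(teacher_grade, calculated_grade, threshold=2):
    """Flag any result the moderation process pulled `threshold` or more
    grades below the teacher-assessed grade."""
    drop = GRADE_SCALE.index(calculated_grade) - GRADE_SCALE.index(teacher_grade)
    return drop >= threshold

print(needs_review("H2", "H4"))  # True: pulled down two grades, review it
print(needs_review("H2", "H3"))  # False: pulled down one grade
```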

4) A variety of other statistical quirks contributed to the overall sense of unfairness. Schools with small numbers of students in particular subjects escaped the statistical moderation process, because of the difficulty of generating reliable estimates from small classes. This effectively benefited private schools, whose results were inflated more than those of the state sector. The algorithm also chose to “round down” when assigning pupils to grades, which led to failing “U” grades being imposed in cases where a student had been judged by teachers to be nowhere near failing.
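The rounding quirk is easy to illustrate. The sketch below uses invented grade boundaries and a simplified marks-based scheme, not the real process, to show how rounding down tips a borderline pupil into the band below where rounding to the nearest mark would not.

```python
# Illustrative sketch only: invented grade boundaries and a simplified
# marks-based scheme, to show how a round-down rule pushes a borderline
# pupil into the band below (including a failing U) where rounding to the
# nearest mark would not.
import math

GRADE_BOUNDARIES = [("A", 80), ("B", 65), ("C", 50), ("D", 40), ("U", 0)]

def grade_for(moderated_mark, round_down=True):
    """Map a fractional moderated mark onto a grade."""
    mark = math.floor(moderated_mark) if round_down else round(moderated_mark)
    for grade, boundary in GRADE_BOUNDARIES:          # highest band first
        if mark >= boundary:
            return grade
    return "U"

print(grade_for(39.6, round_down=True))   # U: floored to 39, below the D boundary
print(grade_for(39.6, round_down=False))  # D: rounded to 40, just clears it
```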

No flexibility

While Ireland has time to learn from these mistakes, it also has a unique vulnerability compared with the UK: the mechanistic nature of its university admissions system. The past few days have seen many English institutions line up to declare leniency for those who performed below expectations and failed to meet the conditions of their university offers. The Irish points system leaves no such room for flexibility. While both countries share a “high stakes” approach to examinations and university admissions, when it comes to this year’s algorithm the stakes are even higher for Irish students.

But while the technical and procedural missteps made in the UK are avoidable, Ireland should also prepare itself for the philosophical reckoning. If socioeconomic patterns of inequality enforced by an algorithm are an outrage to public decency, then why do we tolerate the same patterns every other year?

Carl Cullinane is head of research and policy at the Sutton Trust