Way back in November 2022, a small research company called OpenAI released a free preview of its chatbot, built on a Large Language Model (LLM), into the ether.
Early users and tech reviewers heralded its arrival as a new dawn for humanity, an artificially intelligent system that would revolutionise our world – finding answers to our most challenging problems in health, energy and climate action.
ChatGPT, like other disruptor technologies, went viral within a day, including in education. Work that seemed just a bit too good, too polished, too full of American spellings to be real, started to appear on teachers’ desks with increasing regularity. Most worryingly, this is now happening in the case of Leaving Certificate project work.
This academic year has seen the roll-out of Additional Assessment Components (AACs) worth 40 per cent of the final Leaving Certificate grade in the subjects concerned. These AACs are mostly project-based assignments. The nature of the project varies across subjects, and some AACs are more AI-proof than others, but many can be completed with the assistance of AI.
Much of the talk around AI in schools has understandably centred on students’ use of it and the ways in which this might undermine the academic integrity of State exams. This is a serious issue, with many of the concerns of teachers and their unions being essentially shrugged off. Given the potential for students to use AI tools to assist with their AACs, the State Examinations Commission (SEC) has, belatedly, issued guidance to students and their teachers on the use of AI for these projects. The Coursework Rules & Procedures 2025-2026 is an attempt to clarify the permissible and non-permissible uses of AI. In essence, these rules acknowledge that AI exists and that students will use it, and they task teachers with ensuring that use is kept to a minimum.
The SEC’s attempt to impose guardrails on the use of AI is necessary, but the rules as outlined are too vague, providing only a few examples of acceptable and unacceptable use. More importantly, however, they are divorced from any kind of reality.
When it comes to students’ use of the technology, what we are battling against is human nature. We have evolved and survived as a species by finding ways to conserve energy and to outcompete other animals by whatever means necessary. Artificial intelligence gives us the chance to do both of those things. Many students will adhere to the rules on the ethical use of AI, but many will not. Who can blame them, particularly when they are under pressure from a competitive points race?
When we accept, as the Department of Education has, that students will use this technology to complete coursework, we are inviting into our education system a resource that enables cheating.
But students’ potential unethical use of AI is only one component of the damage these tools can do to education. Some of the lesser-flagged issues are worth elucidating: the educator’s own use of the technology, and the doublespeak around the aims of our education system. Teachers’ use of AI is one of the unspoken aspects of this debate. Last year, the department finally issued guidance on the use of AI in education, focusing on its use by teachers and school management.
The guidance provides “use case” examples and scenarios describing AI’s potential uses. In all the use case examples, teachers use the tool and then inspect and adapt what it produces. Sentences such as “The teacher reviews these outputs carefully, adapting them to the student’s individual needs and school context before implementation” capture the assumption that while the teacher will use AI to “support teaching and learning” (it never mentions timesaving), they will also check the output to ensure it is suitable.
In reality, will time-poor teachers methodically evaluate the resources the artificial intelligence produces to identify gaps, adapt the resources and check for “hallucinations”? Many will, but many will not.
When it comes to assessment, will teachers who use artificial intelligence to grade students’ work and provide feedback check the AI grade against what they would give, and analyse the feedback for inaccuracies? What about the ethics of feeding students’ data (their essay or project) into artificial intelligence? Have the students given consent for their coursework to be fed into an insatiable machine, greedy for data points to enhance its ability and augment the profits of notoriously opaque corporations whose ethics are questionable at best?
The department’s guidelines at no point suggest that teachers use artificial intelligence to grade students’ work or provide assessment feedback, but the reality is that they are, and they will. In England, the Department for Education has said that teachers can use artificial intelligence for “low-stakes” grading, guidance that education secretary Bridget Phillipson said “cut workloads” for teachers. AI trained specifically on Irish data is coming on stream, including Pulc, a tool that grades students’ work and is trained using data from the Irish syllabus. Have we considered the consequences of using artificial intelligence to grade students’ work, and canvassed our students’ views on their teachers using AI to generate grades, feedback or both?
Students in the US were asked by the New York Times for their opinions on teachers using artificial intelligence to grade their work. Many pointed out the hypocrisy of teachers telling students not to use AI and then using it themselves. Others worried about the quality of the feedback provided by artificial intelligence. As one noted: “How can we expect an algorithm to grasp the individuality of a child, the mistakes they made, and furthermore, the reason behind them.”
Schools across Ireland are now looking at bringing in policies related to the acceptable use of AI. Will these policies include sections on teachers’ use?
Ultimately, we need to ask ourselves: do we want to move towards a school system where students produce AI-generated assessments and are then handed back AI-generated grades and comments in a bizarre feedback loop where humans sit on the margins of education?
There is also a broader philosophical problem. There is now a doublespeak that exists in education regarding, on the one hand, the cultivation of what are sometimes called “21st century skills”, such as critical thinking, problem-solving and creativity, and on the other hand, the embracing of a technology that systematically undermines creativity, problem-solving and independent thought. These “21st century skills” or “future skills” have led curricular development over the past decade. The new Junior Cycle introduced in 2015, for example, was an attempt to move from a curriculum focused on content to one focused on skills. The redeveloped senior cycle also has a set of key competencies at its heart.
While many see the newly introduced AACs at senior cycle as an attempt to reduce exam pressure (they won’t), they are part of a pedagogic move away from content and towards skills. In the process, how we assess students has had to change from terminal exams, based on how much information you can cram into your head, to project-based work that is supposed to test your problem-solving, critical thinking, etc.
Our education system from primary to secondary now emphasises the fact that education is about the holistic development of students to prepare them for life as citizens and workers in an age of rapid change.
Enter artificial intelligence, a technology that, whether by design or accident, systematically strangles human creativity, critical thinking and independence of thought. People now routinely ask AI for information and life advice, accepting the answers without performing any due diligence. When we type a question into a search engine and the AI-generated answer pops up, who actually clicks into each website the artificial intelligence references? AI is perfectly designed to lull our critical faculties to sleep.
In the department’s guidance on artificial intelligence, they provide a telling use case for teachers: “In a brainstorming session, students provide individual ideas on a topic. GenAI is used to organise these inputs into categories and generate a visual mind map. The teacher then uses the mind map for questioning, group discussion, and connecting new material to existing knowledge”. In this scenario, the teacher is herself using AI to perform one of those lauded critical thinking skills that schools are supposed to be cultivating.
If we are committed to developing creativity, critical thinking, problem-solving and other such lauded competencies, why invite a technology that undermines all of these elements of human intelligence into education?
The use of AI in our education system among students has the potential to undermine the credibility of our examination system, a system that, for all its flaws, is as objective and fair as any that has ever been designed. Teachers’ use of artificial intelligence is equally concerning for the reasons touched on above.
Ultimately, AI is a threat to what we purport to see as the essence of education: the development of independent-minded and creative thinkers. As a recent Organisation for Economic Co-operation and Development report stated, while AI has potential benefits for students and teachers, it comes with enormous risks. An overreliance on artificial intelligence, the report states, risks “turning students into passive consumers and teachers into supervisors”.
AI’s application in education, in the form our Department of Education intends it to be used, should be resisted until we are satisfied that we can control these inherent risks.
- Alan Curran is a secondary schoolteacher in Co Galway, and a Social Democrats councillor in Galway City West