The announcement that legislation may be brought before Cabinet to permit An Garda Síochána to use artificial intelligence and facial recognition has been greeted with concern by the Irish Council for Civil Liberties, among others. This concern is well-founded. The use of facial recognition by law enforcement authorities in other jurisdictions, in Europe and North America in particular, has illustrated all too clearly the risks to fundamental rights and equality that such technologies can pose. In fact, what is most striking about the move towards using facial recognition technologies in this jurisdiction is how out of step it is with practice in other countries, where similar mechanisms are being rapidly limited or abandoned over fears that they exacerbate discriminatory patterns in policing and are, fundamentally, ineffective.
Local police forces in the United States were among the first to adopt facial recognition and artificial intelligence tools and have, equally, led the move towards banning the use of those tools more recently. Police forces in Berkeley, Oakland, and San Francisco in California, as well as Somerville, Massachusetts, and Portland, Oregon, have banned facial recognition in their cities. In 2019, California passed a three-year moratorium on the use of face recognition technologies in police body cameras. In 2020, Massachusetts passed legislation that banned the use of facial recognition by most government agencies and established a commission to monitor and evaluate the use and impacts of similar technologies in the state. Bans have also been passed in New York, Vermont, and Washington state, while other states, including Michigan and Minnesota, are considering similar laws.
This move away from facial recognition has come, in part, as a result of the work of civil liberties groups and researchers, who have demonstrated that facial recognition technologies are often discriminatory and ineffective, and result in real-time infringements of the fundamental rights of individual citizens.
It is now well established by researchers that facial recognition technologies display marked biases in identifying people of colour and women. A 2018 study by researchers at MIT and Stanford University found that commercial facial recognition technologies misidentified non-Caucasian women 20 per cent more frequently than their Caucasian counterparts, while the error rate for Caucasian men was under 1 per cent. A year later, a study from the National Institute of Standards and Technology in the US produced similar findings, noting that facial recognition technologies frequently misidentified individuals of African and Asian descent.
In 2020, the UK Court of Appeal confirmed the legal impacts which these biases can have. The court found that the use of facial recognition technologies by South Wales Police had not only resulted in breaches of privacy and data protection rights but had also breached national equality laws. In particular, it noted that the force had not taken sufficient steps to ensure that the software it relied on did not display a bias in identification based on race or sex.
This is, of course, a cause for significant concern, as it exposes individuals of a particular ethnic, racial or national identity to the risk of being wrongfully identified as being involved in crimes. It can also expose groups of individuals with similar characteristics to more intense policing and more targeted surveillance.
The errors which characterise facial recognition technologies are not only concerning because of their discriminatory impacts; they also highlight how ineffective these systems can be. In 2019, researchers in the UK found that the facial recognition system used by London's Metropolitan Police had an error rate of 81 per cent, meaning that four of every five individuals identified as suspects by the technology were innocent.
Facial recognition software generally operates by scanning crowds to capture and record the facial characteristics of as many individuals as possible. This indiscriminate collection of data has obvious implications for individual privacy, increasing surveillance of people as they enter and interact in public spaces. That alone is concerning. It is not only the right to privacy that is affected by these technologies, however. Where public spaces are monitored using facial recognition, protests and political or religious events may also be watched and the faces of attendees recorded. This necessarily reduces the privacy of those individuals, but it may also discourage them from attending such events, or from expressing themselves as they otherwise would were they not being monitored.
In this respect, the right to privacy acts as a bulwark against the erosion of other, equally important, rights.
While the legislative basis and regulation for the use of facial recognition technologies by the Garda have not yet been made public, several basic points should be kept in mind. The first is that the images collected through facial recognition are considered biometric data under EU law. Biometric data is based on physical, physiological or behavioural characteristics, and includes identifying data such as fingerprints, blood vessel patterns, and iris or retinal structures. Under the GDPR, such biometric data is subject to additional protections above those afforded to “non-sensitive” personal data.
The second is that the circumstances in which biometric data can be processed are limited and must be strictly interpreted. The European Court of Justice has been sceptical of the broad and indiscriminate collection of data concerning citizens, in particular where that data reveals information about their private lives. In the context of biometric data, this concern would only be heightened. Indeed, in October 2021 the European Parliament adopted a resolution calling for a moratorium on the use of facial recognition in public places, and on the use of AI tools in policing, given their varying degrees of reliability and accuracy and the potential impacts of such technologies on fundamental rights.
Given this context, the announcement that gardaí may begin to use facial recognition technologies and AI tools has, understandably, raised concerns over how (and whether) the experience of other jurisdictions has been taken into account. The precise legal basis for any use of such technologies, and the practical and legal mechanisms imposed to regulate how they are used, may resolve questions about whether, and how, an Irish system of facial recognition would avoid the allegations of fundamental rights infringements and discrimination that have arisen elsewhere. Concerns over the accuracy and efficacy of the underlying technology may not be so easily resolved.
Dr Róisín Á Costello is assistant professor of law, Dublin City University