Tension between AI and personal rights a growing problem

‘Right to explanation’ would boost rights but might slow technological advances

Speaking at the Sorbonne in September 2017, French president Emmanuel Macron made clear his ambitions for Europe to become a global leader in the field of artificial intelligence (AI). The European Commission is presently working on a strategy in this area and is set to deliver a communication on it in the coming months.

But there is much uncertainty over how ambitions such as Macron’s can be attained. Much will depend on how the European Court of Justice and member states interpret a number of key provisions of the General Data Protection Regulation (GDPR). This is the mammoth culmination of the European Union’s five-year effort to make European data-protection law fit for the 21st century. It is due to come into force on May 25th.

There is a clear and growing tension between what the technology can do and what human beings require and are entitled to in order to maintain their personal rights. One area with significant potential for conflict is machine-learning and automated decisions. There is speculation that GDPR will give rise to a citizen’s “right to explanation” when subjected to automated decision-making.

Algorithms

Addressing the French national assembly last month, digital minister Mounir Mahjoubi said: “Any algorithm that can’t be explained can’t be used by the civil service.” Similarly, Roberto Viola, director-general of DG-Connect, which manages the EU’s digital agenda, has stressed the need to “make sure that people are in control and understand when algorithms run the show.”

A right to explanation would enhance individual rights and protections. But it might slow down or inhibit the advance of the new technology.

Many of the GDPR’s provisions are already in effect under the existing Data Protection Directive of 1995. What the GDPR will do, however, is strengthen several elements of this regime, in the expectation that it will “drive new behaviours”, as the Data Protection Commissioner, Helen Dixon, put it at last year’s Dublin Data Summit.

Consent to data processing will now need to be indicated with “a statement or a clear affirmative action”. New obligations have also been introduced in relation to data anonymisation, cybersecurity and data-breach notifications. Most dramatically, fines of up to €20 million or 4 per cent of total worldwide annual turnover can be imposed on organisations that fail to comply (with the proposed exception, in Ireland, of State bodies). It will require much adaptation, but not as much as people might think.

Whether we know it or not, most if not all of us are the subject of automated decision-making on a daily basis. This can happen when one’s picture is scanned by a social network or when one asks the personal assistant on one’s phone to translate something. Automated decision-making will become increasingly common when we step into a driverless car or have our cancer diagnosed by AI.

Systems such as these have shown immense potential, but they are not without limitations. One of the most challenging of these is that the multilayered data calculations and computations involved are so complex that it is often not possible to discern cause and effect. They are, essentially, “black boxes” that can defy explanation.

Rights

But under articles 13, 14, 15 and 22 of the GDPR, European data subjects will have rights in this regard. These include the right “not to be subject to a decision based solely on automated processing”, subject to certain exceptions, and, where those exceptions apply, the right to “meaningful information” about the “logic involved” when automated decision-making occurs.

Ultimate oversight of AI must reside in accountable, human-governed institutions. But the reality is that Europe’s digital economy is critically underdeveloped. This must be remedied as the age of AI advances upon us.

Ways must be found to make legislative systems more responsive to rapid technological change (already many are speculating that the GDPR is likely to be superseded by decentralised data in the form of blockchains). Furthermore, technologists must be required to design with ethics in mind.

There are things that can be done. The creation of an artificial intelligence council or agency, as suggested by Luxembourgish MEP Mady Delvaux in her report to the European Commission on Civil Law Rules on Robotics, published in January 2017, would be an important step here. A regularly updated digital literacy handbook would also be immensely helpful. This could establish a baseline level of understanding, a framework, for both policymakers and technologists.

The dialogue to date between technology and political authority in Europe has too often been superficial and lacking in mutual understanding. This has to change if the profound implications of machine-based decision-making are to be properly addressed.

Neil Brady is a digital policy analyst with the Institute of International and European Affairs