Along with financial records and accounts, personal health and medical information is among the most sensitive, exploitable data associated with an individual – a point widely recognised in international and national privacy protections.
Yet the very organisations – national governments, state agencies, research divisions – which not only should best understand these sensitivities but are also tasked with the responsibility of safeguarding such data, regularly fail in their guardianship role.
Just look at what has happened in the UK, where the identifiable health records of millions of National Health Service (NHS) patients were given on an ongoing basis to Google in a private deal struck in 2015 between the multinational search giant and one of the NHS trusts, the Royal Free hospital trust in London.
The deal was with DeepMind, an artificial intelligence company owned by Google, which, as Cambridge professor John Naughton has noted, "had no previous experience in healthcare".
In an excoriating piece in the Guardian he explored the thorny issues with this deal – a deal that both the UK's national data guardian, Dame Fiona Caldicott, and the UK's information commissioner, Elizabeth Denham, declared an illegal violation of the protections granted to personal health data.
Yet, as Naughton documents, the deal was arranged privately between consultants at the trust who approached DeepMind about using such data to create an app (it's always an app these days, isn't it?) to help manage acute kidney injuries. Only six months after the project was under way did the public first learn about it.
The end goal, as so often with these clumsy projects, was worthy: to manage a disease that causes 40,000 deaths annually in the UK. But exporting the health records of 1.6 million patients for such a project, especially to a company without experience in managing health data, should have triggered a cautious, transparent process of information management and the engagement of protections – none of which happened.
Why? And why is this another example of an eager rush to do “cool stuff” with sensitive data that turns out to be so poorly thought through, a betrayal of trust involving 1.6 million citizens and – let’s not mince words here – illegal?
Is it naivety about the sensitivity of personal information and how it might be exploited? Or commercial greed to feed an artificial intelligence engine the data needed to drive further commercial projects? Or a rush to embrace "the cyber" simply because it's there?
Probably all of the above, which is no excuse at all. As Naughton points out, apologies, no matter how contrite, should not absolve companies, trusts or governments of responsibility, nor shield them from prosecutions and fines.
These presumptive and ill-informed attitudes and approaches to health data are widespread. Just look at what is happening here in Ireland, where the Government keeps steaming ahead with vast, linked data-gathering projects.
We now have three different, big State database-building projects: the Individual Health Identifier (IHI) project and its services card; social welfare’s Public Services Card (PSC); and the Department of Education’s POD (primary online database), a primary school database intended to gather information on students from primary school through to adulthood.
All of these gather sensitive and revealing personal information – but alarmingly, not into one central, and centrally protected, database, but into separate department databases, creating multiple points of potential breach and failure. And each department can add in data from other departments in a way that isn’t transparent to the citizen.
POD gathers very revealing data on individuals through their entire schooling experience, from primary through to third level and beyond.
The IHI will bring together a broad range of data. One IHI project currently collects DNA profiles from epilepsy sufferers, and, like the NHS health project, clearly has good intentions. Yet detailed DNA analysis from an individual could be generalised to all blood relations. And researchers are generally funded by commercially motivated, secretive, third-party companies, such as DeepMind.
The intention with the PSC is that eventually everyone in the State will have this card and it will link to a collection of some of the most sensitive data the State holds on a citizen.
Current legislation allows these databases to grow and be shared without clearly defined limitations. Add to that: the State plans to bring in a new data protection act which exempts government agencies from the significant fines contained in the incoming EU General Data Protection Regulation (GDPR). The State is also trying to water down GDPR-granted citizen rights that allow the equivalent of class action court claims against the State.
Yet, because it gathers and holds so much personal data, the State is and will remain the primary source of risk to our personal data.
As they say: what could possibly go wrong?