Big data and cybercrime require far sharper focus

HSE ransomware attack shows we must presume we cannot always protect our data

The ongoing effects of the ransomware attack on the Health Service Executive have given the Irish public an unsettling insight into the dangers of such attacks. Hospitals across the country were affected by the hacking of HSE computer systems by an international cybercrime group, which has threatened to release stolen data unless a ransom of nearly $20 million is paid. The infiltration has disrupted diagnostic and referral services and forced the cancellation of many appointments, in what has been called possibly the most significant cybercrime attack on the State.

This is the latest major incident in a surge of ransomware demands targeting government agencies and companies worldwide. Just a week before, an attack on Colonial Pipeline, whose infrastructure services the entire east coast of the United States, resulted in widespread fuel shortages. The company has since confirmed that it paid a ransom of $4.4 million in order to restart operations. This week saw an attack on JBS Foods, the world’s leading meat processor.

Experts say such attacks are becoming both more severe and more common, with networks being compromised through weak passwords, phishing emails or out-of-date software. Some attacks are sophisticated enough to leave organisations at a loss as to how to resolve the issue. The malware used against the HSE has been described as a “zero-day” attack since it exploited a previously unknown security flaw. This lack of warning means organisations have no time to take preventative measures, giving them little option but to shut down infiltrated systems.

It is proof – if proof were needed – that we need a bigger conversation about the challenges that come with managing big data. Integrated systems obviously provide many benefits – even in terms of something as simple as your new GP being able to pull up the results of tests from two years ago. But moving away from a multitude of small systems to one large, interconnected one also means a shift in both the nature of the risks and the locus of power to prevent them.

Companies that design and sell such systems have an incentive to be overly optimistic about their ability to make these larger systems secure. Attempts to regulate the development of such technologies and make them more secure are not infrequently met with accusations that regulation would only slow down innovation and increase costs. Fears of losing ground in the race to build the next big company may feel somehow more tangible than the potential harm caused by unimagined data-storage vulnerabilities.

We talk a lot about how employees can reduce the risk of cybercrime, but the increasing complexity of ransomware attacks calls for more than just individual responsibility. Companies these days send out all sorts of emails about data security, especially now that staff are generally working from home. The average system user, however, is just a spectator on the sidelines of the sweeping hacks being carried out by cybercriminals. We play very little part in the work being done behind the scenes by well-compensated experts to design systems so complex we as users can hardly imagine them, much less make them secure.

What is required is a new way of vetting technologies: an approach that goes beyond a mere “do no harm”. Rather than this kind of precautionary principle, we need a disaster principle – assuming the worst and working backwards from there. Some components of some systems might be managed differently, and better, if we work from the assumption that we cannot always protect them.

So what kind of regulation might help us understand and negotiate the hazards of big data in this way? I would suggest, for a start, that any system above a certain threshold of size, value or risk of harm if it fails should be certified – but not just by engineers. At least three further competences should be added to the mix, starting with science and technology experts who understand the social harms and benefits associated with such innovation.

We also need to look to experts in the cultural landscape of the market where a technology is to be deployed. For example, Facebook should never have been allowed to enter Myanmar without appropriate linguistic expertise and cultural sensitivity, as we know from the violent repercussions of hate speech on that platform. The problems of artificial intelligence that has been “trained” on a country’s “normed” data, such as the colour and shape of faces, would readily be caught by someone looking at the technology from this perspective. We could account for differences in work practices this way as well: after all, sometimes the difference between a data security guideline you can follow and one that seems to take over your day has more to do with your environment than your willingness to comply.

Experts in science fiction could have a role to play here as well. Who else is equipped to imagine what a technology might do, and to inform our attitudes toward how it might be (mis)used? Science fiction writers in the United States, for example, have previously advised the Department of Homeland Security on innovative ways of addressing national security threats.

The challenges posed by data systems are not unprecedented in nature. “Knowledge technologies” of all kinds – from your calendar app to your search engine, and from your smartphone to your smart speakers – are relied upon more and more as they scale, but that scale also opens us up to new risks. The idea that these kinds of technological developments take as much as they give is not new.

In his Phaedrus, composed around 370 BC, Plato refers to writing as a “pharmakon”: a drug that in some doses can promote health but in large enough quantities can kill. After all, a collection of manuscripts can almost certainly hold more knowledge than the average individual orator, but it could also burn and be lost utterly.

If technology is a pharmakon, then we need to treat it with the same caution as a drug – and be prepared for its potentially devastating consequences. The way to do this is to embrace the fragility of the systems, rather than deny it, and to bring some real creativity to how we manage them.

Dr Jennifer Edmond is associate professor of digital humanities in Trinity College Dublin