Philosophers key as artificial intelligence and biotech advance

Just because something can be done with technology doesn’t mean it has to be done

Studying philosophy may be about to pay off. Unlike more utilitarian subjects such as law or engineering, the conceptual world of Kant, Mill or even Nietzsche hasn’t provided a route to riches for many to date. That may be about to change as artificial intelligence, big data and biotech converge.

For decades philosophy students have grappled with thought experiments like the trolley problem, in which a runaway trolley car will hit and kill two children unless the person on board intervenes and sacrifices themselves instead. The advent of autonomous vehicles means this dilemma is front and centre for programmers. When the first fatality arises from a pre-programmed decision by a car, the philosophy underwriting the car’s choice of whom to save and whom to sacrifice will be in the dock.
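
To make that dilemma concrete, here is a minimal, purely hypothetical sketch, in Python, of what a pre-programmed priority rule could look like. The class, function names and weighting scheme are invented for illustration; no manufacturer’s actual decision logic is public.

    from dataclasses import dataclass

    @dataclass
    class Party:
        """One group of people the vehicle could harm."""
        label: str              # e.g. "children ahead", "occupant"
        count: int              # how many people are in the group
        survival_chance: float  # estimated chance they survive the impact

    def choose_whom_to_harm(options: list[Party]) -> Party:
        """Return the group whose harm costs the fewest expected lives.

        This is a crudely utilitarian rule. A Kantian programmer might
        refuse to rank lives at all, but that refusal cannot be written
        as a tie-break here, which is exactly the philosophers' point.
        """
        return min(options, key=lambda p: p.count * (1 - p.survival_chance))

    # The column's scenario: two children ahead, one occupant on board.
    ahead = Party("children ahead", count=2, survival_chance=0.1)
    occupant = Party("occupant", count=1, survival_chance=0.2)
    print(choose_whom_to_harm([ahead, occupant]).label)  # -> "occupant"

Whatever numbers or rule the engineers settle on, the point stands: some ethical theory ends up hard-coded, whether or not its authors ever name it.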

In the arena of biotech, we are starting – albeit still at very early stages – to decipher the biochemical algorithms that control us. According to Yuval Noah Harari in his new book, 21 Lessons for the 21st Century, “feelings are biochemical mechanisms that all mammals and birds use in order to quickly calculate probabilities of survival and reproduction. Feelings aren’t based on intuition, inspiration or freedom – they are based on calculation.” Therefore, he argues, given enough biometric data and computing power, systems can hack all your desires, decisions and opinions. They can know exactly who you are.

Black Mirror

It's all very Black Mirror, but it's also potentially closer to reality than many realise.


The potential of neuroscience is not new, nor is its role in the debate over how far nature or nurture shapes human development. Engineers are already developing software that can detect human emotions, and they are doing so from the perspective of scientific insight, seemingly untroubled by the ethical dilemmas such work raises. John Holden recently reported here on how a Stanford University study used artificial intelligence (AI) technology to guess people’s sexual orientation by analysing their headshots. The machine turned out to be worryingly accurate.

The researchers showed that a machine-learning algorithm needed nothing more than a few photos of a person’s face to classify them as gay or straight.

Trained on a sample of more than 35,000 facial images taken from an unnamed online dating website, the Stanford algorithm correctly distinguished gay from straight men 81 per cent of the time, and gay from straight women 71 per cent of the time, when provided with just one image of the subject.
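
For readers wondering what a figure like “81 per cent of the time” means in practice, here is a minimal, hypothetical sketch of how such a classifier’s accuracy is usually measured: train a model on one set of labelled faces, then score it on faces it has never seen. This is not the Stanford team’s code; the simulated features, labels and model below are stand-ins.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Stand-in data: the real study derived features from facial images;
    # here we simply simulate 35,000 feature vectors with random labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(35_000, 128))   # hypothetical per-face feature vectors
    y = rng.integers(0, 2, size=35_000)  # hypothetical binary labels

    # Hold out unseen faces so the score reflects generalisation, not memorisation.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"Correct on {accuracy:.0%} of held-out faces")

On random data like this the score hovers around 50 per cent, pure chance, which is what makes the reported 81 per cent so striking.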

In his new book, Harari depicts a futuristic scenario where North Korea’s Kim Jong-un uses biometric sensors to pick up telltale signs of discontent in individual citizens. Higher blood pressure or increased activity in the amygdala upon sighting a picture of the Great Leader and “you’ll be in the Gulag tomorrow morning”. Orwell, it seems, was incredibly prescient about the powerful threat of technology.

Sci-fi pie in the sky? Consider China. As Clifford Coonan reported last year, the Chinese government has introduced a data-driven social credit system to force its citizens to be honest and trustworthy, ranking them by good or bad behaviour, with performance-related rewards and punishments. The system uses algorithms to keep people honest, and its central message is that “to keep trust is glorious”.

Due to roll out in 2020, the social credit system uses big data technology to collect information on all citizens and to analyse that information to rate behaviour, including financial creditworthiness and personal conduct. The platform connects 37 government departments and agencies: the commercial side runs through the People’s Bank of China and the ministry of finance, but it also draws on the food and drug administration, the national health and family planning commission, and the ministries of transport, housing and urban-rural development.
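
As a rough illustration of the kind of aggregation such a platform implies, here is a hypothetical sketch of how records from several agencies might be folded into a single score with attached rewards and punishments. Every field name, weight and threshold below is invented; the real system’s scoring rules are not public.

    # All agency names, weights and thresholds are invented for illustration.
    AGENCY_WEIGHTS = {
        "central_bank_credit": 0.4,    # financial creditworthiness
        "transport_violations": -0.2,  # e.g. fare dodging, traffic fines
        "court_judgments": -0.3,       # unpaid fines, defaulted debts
        "approved_volunteering": 0.1,  # officially recognised good conduct
    }

    def social_score(records: dict[str, float]) -> float:
        """Weighted sum of each agency's normalised (0-1) record."""
        return sum(weight * records.get(agency, 0.0)
                   for agency, weight in AGENCY_WEIGHTS.items())

    def consequences(score: float) -> str:
        if score > 0.3:
            return "perks: fast-tracked loans, easier travel"
        if score < 0.0:
            return "penalties: travel bans, restricted credit"
        return "no change"

    citizen = {"central_bank_credit": 0.9, "transport_violations": 1.0}
    print(consequences(social_score(citizen)))  # -> "no change"

The sketch is trivial on purpose: once disparate records sit in one place, turning them into rewards and punishments is the easy part.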

Data trawl

The convergence of this data trawl with biotech has clear potential to be even more sinister. The critical issue in all this will be the ownership and control of such data.

The appetite for innovation and invention, and the potential benefits to health and wellbeing, will drive us forward, but we need to consider the nihilistic viewpoint at each stage as well.

The challenges faced by the precariat – those who lack job or economic security at a time when technology is replacing the physical and, increasingly, the cognitive skills they brought to the labour market – have been blamed for the rise of populism. Perhaps we are already seeing some early iterations of the political backlash from the rise of big data.

The problem, according to Harari, is that “democracy in its present form cannot survive the merger of biotech and infotech”. As the algorithms come to know us so well, every process, request and application, from bank loans to health insurance, will be determined by the algorithm, not by a human.

Add in the ability to monitor, predict and influence our emotions and it starts to become clear why philosophers are debating technological determinism: just because something can be done doesn’t mean it has to be done. Time to pick up a bit of Plato.