Are tech companies threatening freedom of thought and democracy?
The key question is where to draw the line between legitimate influence and unlawful manipulation
Reports that Cambridge Analytica were able to access tens of millions of Facebook profiles, and use these to target people with personalised political advertisements, raise a number of concerns. For us, this is an example of why the ability of tech companies to deploy their technological, psychological and sociological expertise to influence users’ thoughts, feelings and behaviours is one of the central ethical issues of our generation. Indeed, we are concerned that such activities threaten our right to freedom of thought and, by extension, democracy itself.
The right to freedom of thought is enshrined in the Universal Declaration of Human Rights, which is recognised as customary international law. It is understood to have three key elements: freedom to keep our thoughts private; freedom from indoctrination or manipulation of our thoughts; and a prohibition on penalising a person for their inner thoughts.
Without freedom of thought, argued US supreme court justice Felix Frankfurter, there can be no free society. Threats to the right to freedom of thought hence threaten democracy itself. It is therefore some comfort to know that this right is absolute. It is protected unconditionally, even in time of public emergency. If an interference or limitation is established, it will be unlawful and cannot be justified.
Manipulating our thoughts
When international human rights law frameworks were being developed in the 20th century, there was a general assumption that the mind was intrinsically free. As a result, little attention has been given to what the right to freedom of thought means in practice. However, the potential for tech companies to violate this right by accessing, manipulating or penalising our thoughts is a game-changer.
Tech has an ever-increasing ability to access our thoughts. In 2015, researchers reported that access to a user’s Facebook footprint gave them more insight into that user’s personality than their close friends and family had. The idea behind this research was used by Cambridge Analytica to profile millions of Facebook users. The breaking of this story is likely to have surprised many tech users, who may not have known quite how much information about their inner world can be deduced from what they reveal on social networking sites.
As machine-learning technology continues to progress, it is difficult, if not impossible, for users to know what information they are unwittingly revealing about their thoughts through their digital footprint. Furthermore, as we increasingly have to exchange our data for access to basic services, we may have little practical choice about whether or not to give access to our inner world.
Tech also actively aspires to break down the wall to our inner world. Last year Facebook announced plans for a “brain-computer interface” that would allow users’ thoughts to be decoded and transmitted straight to Facebook, without the need for typing or speaking.
Not only can our thoughts be accessed by tech, they can also be influenced by it. Cambridge Analytica’s targeting of tailored political advertisements at individuals, based on inferences about what each voter was particularly likely to respond to, is a good example of this.
More generally, a range of former tech insiders have voiced concerns that tech manipulates the minds of its users, warning of minds being “hijacked”. The now infamous Facebook newsfeed experiment has already shown the ability of tech to manipulate our inner world.
Tech also manipulates our minds by using insights from behavioural science to make platforms, apps and websites “sticky”, encouraging users to stay longer and return more frequently. Techniques used to do this include variable ratio reward schedules, encouraging social reciprocity, allowing discernible increases in social status, and cultivating a fear of missing out.
This stickiness can shade into addiction. Internet gaming disorder is on the verge of being recognised as a psychiatric disorder, and it appears likely that the concept of a social networking disorder may shortly follow. The term “user” is now not without irony.
The mere fact that tech influences our thoughts is not in itself problematic. Everything we interact with influences our thoughts. The fundamental question is where to draw the line between legitimate influence and unlawful manipulation of thought.
According to the precautionary principle, when human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish that harm. Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the potential scope of “morally unacceptable harm”.
Human rights are “living instruments” that develop and evolve to reflect changes in society. In the 21st century, the assumption that no government or person can get into the “inner sanctum” of your mind is no longer justified. It is time to look carefully at the legal boundaries we need to protect freedom of thought, and by extension, our democracy, now and for generations to come.
Susie Alegre is a barrister with Doughty Street Chambers in London and Simon McCarthy-Jones is an associate professor in the department of psychiatry at TCD