Formulating values for AI is hard when humans do not agree
China and the West prioritise different things in algorithms
Computer algorithms encoded with human values will increasingly determine the jobs we land, the romantic matches we make, the bank loans we receive and the people we kill, intentionally with military drones or accidentally with self-driving cars.
How we embed those human values into code will be one of the most important forces shaping our century. Yet no one has agreed what those values should be. Still more unnerving is that this debate now risks becoming entangled in geo-technological rivalry between the United States and China.
During the past two centuries, the West has enjoyed a near-monopoly in the global values-uploading business. It has embedded its norms in international treaties and institutions. But in the digital realm it now faces a formidable rival in China, which is fast emerging as an artificial intelligence (AI) superpower determined to set its own rules.
Just how far China’s values differ from those of the West was highlighted during an AI ethics seminar at the Nuffield Foundation in London this month.
To date, some 50 sets of AI principles have been published around the world by companies, governments and private institutes. The authors include China's biggest tech companies, among them Tencent and Baidu. In May, a government-backed academy issued the Beijing AI Principles.
Codes of principles written in the West tend to focus on fairness, transparency, individual rights, privacy and accountability. But Song Bing, director of the Berggruen Institute China Centre, argued at the seminar that this jars with Chinese sensibilities. “These values are mostly western in origin. That does not mean that there is no resonance in China and the rest of the world. But are they the right set of rules for a global normative framework?” she asked.
Ms Song said that Chinese AI ethicists prioritise values that are open, inclusive and adaptive, speak to the totality of humanity and reject zero-sum competition. Summarising this philosophy, she told the seminar that these values add up to “great compassion and deep harmony”. Collective good is just as important as individual rights.
However, Liu Zhe, a philosopher from Peking University, said it would be wrong to believe that there was any one Chinese value system, mixing as it does elements of Confucianism, Daoism and Buddhism. That range of values would militate against a universal approach to AI within China, let alone elsewhere.
Zeng Yi of the Chinese Academy of Sciences in Beijing also questioned the need for a global set of principles. “They should not compete with each other, but complete each other to provide a global landscape for AI,” he said.
He even asked whether the attempt to “humanise AI” made sense, given that some Chinese researchers consider humans to be “the worst animals in the world”. Could robots not operate to a higher standard of ethics than humans? This talk of de-anthropocentrism, as it has been called, alarmed the western participants in the seminar, who argued it was a false and dangerous promise.
These views matter because, as Kai-Fu Lee has argued in his book AI Superpowers, China may soon lead the world in applying AI in many fields. Chinese apps, devices and robots that embody these principles are increasingly being used around the world.
China has adopted what Mr Lee calls a “techno-utilitarian” approach, emphasising the greatest good for the greatest number rather than a moral imperative to protect individual rights. That, he suggests, is one reason why Chinese consumers are less concerned about installing facial recognition devices in supermarket trolleys to personalise shopping trips, or in classrooms to spot inattentive students. China strikes a different trade-off between surveillance and convenience than the West does.
Critics warn that such arguments can be used to defend what should be the indefensible use of surveillance technologies to suppress dissent in Xinjiang and elsewhere. Moreover, the use of private data and AI by Chinese organisations does not just affect Chinese citizens. US officials have recently raised security and privacy concerns about the Chinese ownership of Grindr, the dating app for the gay community that has 4.5 million active daily users.
Given differing cultural traditions, philosophers could spend many lifetimes debating a set of universal AI principles. Should our healthcare app contain a mini-Confucius or a mini-Kant? But at a practical level, we need some minimum agreement on a global level. Basic international frameworks outlining the acceptable use of AI in cyberwarfare and robotics are a good place to start. – Copyright The Financial Times Limited 2019