Stephen Hawking and Elon Musk endorse new AI code

Web Log: Principles state goal of AI research should be to create ‘beneficial intelligence’

These 23 principles were agreed upon at the recent Beneficial AI 2017 conference at the Future of Life Institute in Boston

A list of 23 principles related to the ethics, research strategies and long-term issues arising from developing artificial intelligence technologies has been endorsed by high-profile scientists and technologists including Stephen Hawking; Elon Musk; Ray “the Singularity” Kurzweil; Demis Hassabis, founder and CEO of DeepMind; and Prof Yann LeCun, director of AI research at Facebook.

Curiously, actor and film-maker Joseph Gordon-Levitt has also signed the list – and actor-turned-science communicator Alan Alda sits on the institute’s scientific advisory board.

Existential risks

These 23 principles were agreed upon at the recent Beneficial AI 2017 conference, organised by the Future of Life Institute in Boston. They state that the goal of AI research should be to create “beneficial intelligence” rather than undirected intelligence, and that plans should be put in place in case AI systems come to pose catastrophic or existential risks to human life.

There was also a stipulation that strict safety and control measures be in place for self-improving and self-replicating AI systems; nobody wants swarms of determined, superintelligent nanobots deciding that planet Earth’s greatest threat (humans) should be eradicated.

https://futureoflife.org/ai-principles