Asimov gave his fictional robots the Three Laws to protect humans and robots, but what about in non-fiction?

Asilomar Conference on Beneficial AI

The Asilomar Conference was organised by the Future of Life Institute and took place in January 2017 at the Asilomar Conference Grounds in California, bringing together researchers, experts and thought leaders to discuss the principles of beneficial AI. The group created 23 principles, which are known as the Asilomar AI Principles.

Asilomar AI Principles

The 23 principles can be grouped into three categories. I’ve summarised the principles below, and provide a link to the full version in the Signatories section.

Research Issues

Research Goal, Research Funding, Science-Policy Link, Research Culture, Race Avoidance.

AI research should aim to create beneficial intelligence, not undirected intelligence. Investments in AI should be accompanied by funding for research that ensures AI is beneficial. AI researchers and policy makers should have constructive exchanges, with a culture of cooperation, trust and transparency. AI developers should actively cooperate and not cut corners on safety standards.

Ethics and Values

Safety, Failure Transparency, Judicial Transparency, Responsibility, Value Alignment, Human Values, Personal Privacy, Liberty and Privacy, Shared Benefit, Shared Prosperity, Human Control, Non-subversion, AI Arms Race.

AI systems should be safe and secure. Systems should be transparent when they fail, and judicial decision-making systems should be auditable by a competent human authority. AI designers and builders are moral stakeholders in the implications of their AI systems’ use. AI goals should be designed to align with human values. AI systems should be designed to be human-compatible. People have the right to access, manage and control their data. Humans should choose how, and whether, to delegate decisions to AI systems. An arms race in lethal autonomous weapons should be avoided.

Longer Term Issues

Capability Caution, Importance, Risks, Recursive Self-Improvement, Common Good.

Strong assumptions about the upper limits of future AI capabilities should be avoided where there is no consensus. Advanced AI could have a profound impact on the history of life on Earth, including existential risk, and requires planning and mitigation. AI systems that can recursively self-improve or self-replicate must be subject to strict safety and control measures. AI superintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity.

Signatories

The principles were published in August 2017. They can be viewed, and signed if you wish, at: https://futureoflife.org/open-letter/ai-principles/

At the time of writing (June 2025), the principles have been signed by over 5,000 people, including Demis Hassabis, Ilya Sutskever, Stuart Russell, Stephen Hawking, Sam Altman, Elon Musk and Donald Knuth.