AI & Robotics: Codes and guidelines
The SIENNA project conducted a survey of codes and guidelines for artificial intelligence and robotics. The survey was submitted to the European Commission in 2018, and lists a large body of codes and guidelines. Here, you will find a link to our full report, and a selection of guidelines we consider important to be aware of.
The list below builds on Tambornino L, Lanzerath D, et al., D4.3 Survey of REC approaches and codes for Artificial Intelligence and Robotics (2018), a public deliverable report from the SIENNA project. You will find international codes and guidelines at the top of the page and relevant national documents at the bottom.
You can download our full report on research ethics committee approaches for AI and robotics from Zenodo. For a comprehensive list of further guidelines, please visit Algorithm Watch.
International codes and guidelines
AI HLEG Ethics Guidelines for Trustworthy AI
published by the EU High-Level Expert Group on Artificial Intelligence (AI HLEG) in April 2019
The AI HLEG guidelines were produced by an independent expert group set up by the European Commission. The guidelines went through a process of revision with stakeholders before final publication. They propose seven key requirements, based on fundamental rights and ethical principles, for AI systems to be trustworthy: Human agency and oversight; Technical robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; Accountability.
Link: AI HLEG Ethics Guidelines for Trustworthy AI.
SHERPA Guidelines for the Use and Development of AI and Big Data Systems
published by the EU funded SHERPA project in 2019
These guidelines are twofold: they include guidelines for the ethical use of AI and big data systems and guidelines for the ethical development of AI and big data systems. They derive from a process of consultation with a variety of stakeholders and are inspired by existing guidelines, notably the AI HLEG Ethics Guidelines. They are operational guidelines intended to be practically implemented by organisations and researchers.
Link: SHERPA Guidelines for the Use and Development of AI and Big Data Systems
ACM Code of Ethics and Professional Conduct
published by the Association for Computing Machinery (ACM) in 1992 (updated 2018)
A widely recognised code of ethics in information technology, covering many of the key ethical areas encountered in information technology practice. Section 1 outlines fundamental ethical principles that form the basis for the remainder of the Code. Section 2 addresses additional, more specific considerations of professional responsibility. Section 3 guides individuals who have a leadership role, whether in the workplace or in a volunteer professional capacity. Commitment to ethical conduct is required of every ACM member, and principles involving compliance with the Code are given in Section 4.
Link: ACM Code of Ethics and Professional Conduct
IEEE Code of Ethics
published by the IEEE (revised 2017)
A commitment by IEEE members to the highest standards of ethical and professional conduct, covering aspects of technology practice such as: the safety, health, and welfare of the public; ethical design and sustainable development practices; conflicts of interest; honesty; rejection of bribery; societal implications; full disclosure of pertinent limitations; scientific and research integrity; fairness and non-discrimination; avoidance of injury; and assisting colleagues and co-workers.
Link: IEEE Code of Ethics
Asilomar AI Principles
published by the Future of Life Institute in conjunction with the 2017 Asilomar conference
This list includes 23 principles aimed at ensuring the ethical development of AI. The principles emerged from the 2017 Asilomar conference, attended by a wide range of stakeholders, and touch on Research Issues, Ethics and Values, and Longer-Term Issues. The list is signed by numerous AI and robotics researchers (1,668 at the time of writing) and other experts and stakeholders (3,655).
Link: Asilomar AI Principles
The Montreal Declaration for a Responsible Development of Artificial Intelligence
published by the Forum on the Socially Responsible Development of AI in 2017
This declaration seeks to “spark public debate and encourage a progressive and inclusive orientation to the development of AI.” The values were proposed by a group of ethics, law, public policy and artificial intelligence experts and informed by a deliberation process that included consultations held over three months, in 15 different public spaces, and exchanges between over 500 citizens, experts and stakeholders.
Link: The Montreal Declaration for a Responsible Development of Artificial Intelligence
Top 10 Principles For Ethical Artificial Intelligence
published by the UNI Global Union in 2017
This document offers 10 principles and specific points of action, which unions, shop stewards and global alliances must implement in collective agreements, global framework agreements and multinational alliances.
Link: Top 10 Principles For Ethical Artificial Intelligence
Barcelona Declaration for the proper development and usage of artificial intelligence in Europe
published by international experts in artificial intelligence at a B·Debate session, an initiative of Biocat and the “la Caixa” Foundation, in 2017
The Declaration recognises that “AI can be a force for the good of society, but that there is also concern for inappropriate, premature or malicious use so as to warrant the need for raising awareness of the limitations of AI and for collective action to ensure that AI is indeed used for the common good in safe, reliable, and accountable ways.”
Link: Barcelona Declaration for the proper development and usage of artificial intelligence in Europe
Humanitarian UAV Code of Conduct & Guidelines
published by the Humanitarian UAV Network (UAViators) in 2014
The code covers a variety of ethical aspects and is widely contributed to and recognised: it was revised by more than 60 organizations through multiple open, multi-stakeholder consultations over the course of two years. In 2015, dedicated guidelines were added to the Code of Conduct to provide further guidance on Data Protection, Community Engagement, Effective Partnerships and Conflict Sensitivity.
Link: Humanitarian UAV Code of Conduct & Guidelines
Mission Statement and Berlin Statement
published by the International Committee for Robot Arms Control (ICRAC) in 2009
The documents cover the long-term risks posed by the proliferation and further development of armed tele-operated and autonomous robotic weapon systems. The Mission Statement calls for a discussion about an arms control regime to reduce the threat posed by these systems. The Berlin Statement calls for an arms control regime to regulate the development, acquisition, deployment, and use of such weapons and makes recommendations on what the regime should prohibit.
Link: Mission Statement and Berlin Statement
Software Engineering Code of Ethics
published by the IEEE-CS/ACM joint task force on Software Engineering Ethics and Professional Practices (SEEPP) in 1999
The Code contains eight Principles related to the behaviour of and decisions made by professional software engineers, including practitioners, educators, managers, supervisors and policy makers, as well as trainees and students of the profession. The Principles identify the ethically responsible relationships in which individuals, groups, and organizations participate and the primary obligations within these relationships. The Clauses of each Principle are illustrations of some of the obligations included in these relationships.
Link: Software Engineering Code of Ethics
Code of Ethics
published by the International Council on Systems Engineering (INCOSE) (undated)
This Code is concerned with how certain fundamental imperatives apply to one's conduct as an engineering professional. These imperatives are expressed in a general form to emphasize that principles which apply to engineering ethics are derived from more general ethical principles.
Link: Code of Ethics
Opinion 3/2018 on online manipulation and personal data
published by the European Data Protection Supervisor (EDPS) in 2018
The Opinion specifically covers AI, the unethical use of personal information in data processing, and the fundamental rights and values at stake. It examines how personal data is used to shape the online experience, the digital (mis)information ecosystem, and the relevant legal frameworks, and it includes recommendations.
Link: Opinion 3/2018 on online manipulation and personal data
Report of COMEST on robotics ethics
published by the COMEST Working Group on Robot Ethics in 2017
The report aims to raise awareness and promote public consideration and inclusive dialogue on ethical issues concerning the different uses of contemporary robotic technologies in society. It proposes a technology-based ethical framework for recommendations on robotics ethics, based on the distinction between deterministic and cognitive robots. It further identifies ethical values and principles that can help set regulations coherently at every level, from engineers’ codes of conduct to national laws and international conventions.
Link: Report of COMEST on robotics ethics
Opinion 9/2016 EDPS Opinion on Personal Information Management Systems
published by the European Data Protection Supervisor (EDPS) in 2016
The document can contribute to a sustainable and ethical use of big data and to the effective implementation of the principles of the GDPR. The Opinion analyses how personal information management systems (PIMS) can contribute to better protection of personal data and what challenges they face; it identifies ways forward to build on the opportunities they offer and draws some conclusions and next steps.
Link: Opinion 9/2016 EDPS Opinion on Personal Information Management Systems
Opinion 4/2015 Towards a new digital ethics: Data, Dignity and Technology
published by the European Data Protection Supervisor (EDPS) in 2015
The Opinion addresses ethical issues pertaining to big data, the Internet of Things, ambient computing, cloud computing, drones and autonomous vehicles. It outlines a four-tier ‘big data protection ecosystem’ to respond to the digital challenge, a collective effort underpinned by ethical considerations: (1) future-oriented regulation of data processing and respect for the rights to privacy and to data protection; (2) accountable controllers who determine personal information processing; (3) privacy-conscious engineering and design of data processing products and services; and (4) empowered individuals.
Link: Opinion 4/2015 Towards a new digital ethics: Data, Dignity and Technology
National codes and guidelines
Ethics of research in robotics
published by the Research Ethics Board of Allistene, the Digital Sciences and Technologies Alliance (CERNA) in 2014 (France)
This document seeks to cover the ethical issues raised by robotics. It introduces them by presenting the context through a focus on “robots in society” (Ch. 3). CERNA’s recommendations fall into four categories: general recommendations; autonomy and decisional capacities; imitation of life and affective and social interaction with human beings; and the repair and augmentation of humans by machines.
It is an important document for France, as it was developed by a group bringing together a number of French research institutions. It is one of the few documents engaging with robotics ethics in such detail, and it proposes precise and thoughtful ethical recommendations on this technology.
Link: Ethics of research in robotics
Research Ethics in Machine Learning
published by the Research Ethics Board of Allistene, the Digital Sciences and Technologies Alliance (CERNA) in 2017 (France)
Ethical issues related to machine learning are described, especially in the context of chatbots, autonomous vehicles and robots that interact with people and groups. “For any digital system, the aim should be to embody the properties described in III.1. However, machine learning systems possess certain specificities, described in III.2, which come into conflict with those general properties.” (p. 17)
It engages in great detail with ethical issues related to machine learning and provides precise and informed recommendations.
Automated and connected driving
published by the Ethics Commission on Automated Driving in 2017 (Germany)
The report discusses ethical issues related to automated and connected vehicles. The Ethics Commission's report comprises 20 propositions and defines key elements for automated driving (e.g. the ethical imperative).
The Ethics Commission on Automated and Connected Driving has developed initial guidelines for policymakers and lawmakers that will make it possible to approve automated driving systems but that set out special requirements in terms of safety, human dignity, personal freedom of choice and data autonomy.
Link: Automated and connected driving
Robots and surveillance in the care of older people – ethical aspects
published by the Swedish National Council on Medical Ethics in 2014 (Sweden)
“The aim of the report is to encourage public debate and provide support ahead of decisions on the use of robots and monitoring in health and medical care, and care provided by social services, to elderly people.” (p. 1). Questions about what good care and quality of care mean are discussed, as well as the fair distribution of resources, society's interests, self-determination and privacy.
Important ethical issues are highlighted, and recommendations are made by the Council regarding health robots and monitoring.
Link: Robots and surveillance in the care of older people – ethical aspects
Human Rights in the Robot Age: Challenges Arising from the Use of Robotics, Artificial Intelligence, and Virtual and Augmented Reality
published by the Rathenau Institute in 2017 (Netherlands)
The report outlines ethical issues raised by novel AI and robotics (AI&R) technologies in relation to various human rights. It offers a number of recommendations in terms of policy steps for addressing these issues. The report focuses on the impact of various AI&R technologies on human rights and argues for two novel human rights: the right not to be measured, analysed or coached, and the right to meaningful human contact.
Statement of Ethical Principles
published by the Engineering Council and the Royal Academy of Engineering in 2005 (revised 2017) (UK)
The document sets out four fundamental principles for ethical behaviour and decision-making:
Honesty and integrity; Respect for life, law, the environment and public good; Accuracy and rigour; Leadership and communication. The four principles are supported by examples of how each should be applied. This document is a good example of a well-recognised and accepted Code.
Link: Statement of Ethical Principles.
Code of Conduct For BCS Members/BCS Code of Conduct
published by the British Computer Society (BCS) in 2015 (UK)
The Code sets out the professional standards required by BCS as a condition of membership and applies to all members, irrespective of their membership grade, the role they fulfil, or the jurisdiction where they are employed or discharge their contractual obligations. Topics covered are the public interest, professional competence and integrity, duty to relevant authority, and duty to the profession. In this way the Code prescribes professional standards for information technology (IT) professionals.
Link: Code of Conduct For BCS Members/BCS Code of Conduct.
The National Artificial Intelligence Research and Development strategic plan (NAIRDSP)
developed by the National Science and Technology Council in 2016 (USA)
The document discusses machine learning and deep learning, along with a variety of AI applications, including image recognition and language processing. It addresses ethical issues regarding the design and implementation of AI systems, including research aimed at understanding ethical implications; fairness, transparency and accountability by design; and public safety.
See especially “Strategy 3: Understand and Address the Ethical, Legal, and Societal Implications of AI“ (pp. 26-27). This is a document created by an advisory body of the government aimed at a governmental audience. Its goal is to define a high-level framework that can be used to identify scientific and technological needs in AI.
Link: The National Artificial Intelligence Research and Development strategic plan (NAIRDSP)