What a well-regulated AI and robotics world would look like



At the SIENNA Final Conference on 11 March 2021, we facilitated a session on AI and robotics: regulatory and policy recommendations. The session was led by Rowena Rodrigues from Trilateral Research and covered SIENNA's legal analysis work for AI and robotics and our recommendations. SIENNA's objective in its legal analysis work and recommendations is to support and ensure the ethical and human-rights-respectful design, development, deployment and use of AI and robotics technologies. During the session, we asked our audience to share their views live with us on two questions. Curious about the results?

So what are the biggest challenges for regulating AI and robotics? In addition to the ones outlined in our report on legal and human rights requirements for AI and robotics in and outside the EU, 35 final conference participants shared their thoughts with us. Their answers echoed our findings, creating a word cloud with conceptual clarity, economic interests, enforcement, timing, responsibility, speed of research, misuse of ethics, self-regulation, dual use and the international market at its centre.

(Image removed)

According to another poll of 26 final conference participants, a well-regulated AI and robotics world is one where…

  • It’s not a concern anymore
  • Regulation frames practice and not the other way around
  • Human rights are respected
  • There is a balance between technology and the rule of law
  • A democratically legitimated compromise allows effective and good use of AI&R while reducing the risk of adverse use
  • AI is controlled and people are free to use/not use and educated on both aspects, empowered and engaged in fundamental issues to help political decision-making
  • Human control is preserved - people are in control
  • The tools serve the people
  • AI is not treated as the solution for everything, in every place, at every point in time, but is used only where it is sensible and useful
  • AI works for the benefit of people
  • All people benefit equally from AI, and harms are mitigated and not unfairly distributed
  • Optimal interest is balanced
  • Limits are set
  • An ethics by design approach is being followed
  • Science and ethics talk to each other
  • Analysts and developers think about societal impact of applications
  • There is stakeholder engagement
  • The politics and ethics are tightly (philosophically and metaphysically) integrated
  • Asymmetries of power are resolved
  • We have proven mechanisms against power abuses and malicious uses
  • Companies are accountable
  • Clear accountability measures are enforced by independent, transparent and competent authorities
  • Human, not economic interests guide development and use
  • People are well-informed and educated about the technologies' impact
  • There is no fear of curtailing innovation

If you missed our session, don't worry! We will publish a recording on the SIENNA YouTube channel soon. For more details, see our reports and policy briefs.

By Rowena Rodrigues, Trilateral Research

Want to know more about what we discussed? Check out our final conference programme (Link removed) !







Disclaimer: This website and its contents reflect only SIENNA's view. The Commission is not responsible for any use that may be made of the information it contains.