Protecting the vulnerable from AI harms



Applications that use artificial intelligence (AI) are trained on large sets of data and often build on other systems. This means that any bias in the data can multiply across different AI applications and cause significant harm. A recent paper from SIENNA points to the legal and human rights implications of AI and calls for an agile approach, not just to AI development but also to the laws that regulate technology. In her paper, Rowena Rodrigues calls on developers and legislators to pay attention to the impact of AI on vulnerable populations.

“Right now, AI is at the forefront of discussions. But a convergence of AI, robotics, the internet of things and other new developments will change this, and we will need new discussions about new, unique dilemmas in law and in society. We need continuous evaluation and risk assessment at the early stages of AI research and development,” says Rowena Rodrigues, deputy coordinator of SIENNA and senior research manager at Trilateral Research Ltd.

SIENNA has produced reports on the state of the art of AI and the associated legal and human rights challenges. In addition, a recent SIENNA publication in the Journal of Responsible Technology highlights issues related to the design, implementation and use of AI. The paper underlines the importance of identifying and addressing the potential impacts of AI technology early on, and of giving due consideration to its disproportionate effects on vulnerable groups.

Several actors need to take action to reduce the adverse impacts of AI on vulnerable communities. AI researchers, funders, developers, deployers and users should engage in continuous risk identification, evaluation and mitigation. Policy-makers and regulators at the international, EU and national levels can support capacity-building for resilience, reduce the negative effects of AI on vulnerable communities and address the root causes of vulnerability.

By Anna Holm

Rodrigues R., Legal and human rights issues of AI: gaps, challenges and vulnerabilities, Journal of Responsible Technology, Vol. 4, 2020


Disclaimer: This website and its contents reflect only SIENNA's view. The Commission is not responsible for any use that may be made of the information it contains.