Amnesty International has launched an Algorithmic Accountability toolkit to empower rights defenders, activists, and communities to investigate, and hold to account, AI and automated decision-making (ADM) systems that impact human rights. The toolkit draws on Amnesty’s investigations, campaigns, media work, and advocacy across countries including Denmark, Sweden, Serbia, France, India, the United Kingdom, the United States, the Occupied Palestinian Territory, and the Netherlands. It provides a practical guide to uncovering harms arising from algorithmic systems, particularly in public sector areas such as welfare, policing, healthcare, and education.
Despite claims by governments and corporations that AI systems improve efficiency or societal outcomes, Amnesty International highlights that these technologies often perpetuate bias, exclusion, and human rights abuses. The toolkit is designed to be adaptable for use by civil society organizations, journalists, impacted communities, and other groups seeking to investigate or challenge the use of algorithmic and AI systems in public services.
The toolkit offers a multi-pronged approach based on three years of Amnesty’s investigations and collaborations with key partners. It provides practical templates, research methods, and strategies for ending the use of abusive AI systems through advocacy, campaigning, strategic communications, or litigation. A central principle of the toolkit is collaboration, emphasizing the agency of communities in driving accountability and ensuring that investigations are informed by those most affected by AI-related harms.
One highlighted case study is Amnesty’s investigation into Denmark’s welfare system, where AI-powered tools used by Udbetaling Danmark (UDK) were found to fuel mass surveillance and risk discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups. This investigation relied on partnerships with journalists, local civil society organizations, and impacted communities, demonstrating the importance of collective action in addressing algorithmic harms.
Human rights law is positioned at the center of the toolkit as a critical framework for accountability, addressing a gap that ethical AI frameworks and audit methodologies often overlook. Amnesty emphasizes the urgent need for vigilance, noting that unchecked state and corporate AI development poses risks to social protections, freedoms, and inclusion. The toolkit aims to democratize knowledge and equip investigators, civil society organizations, journalists, and communities to uncover harmful AI systems, demand accountability, and prevent abuses enabled by these technologies.