The rapid integration of artificial intelligence (AI) into court systems offers opportunities for greater efficiency and improved access to justice, yet it carries significant risks: errors, bias, or overreliance on algorithmic outputs may undermine judicial independence and human rights. On 15 December 2025, experts including Margaret Satterthwaite, UN Special Rapporteur on the independence of judges and lawyers; Tawfik Jelassi, UNESCO Assistant Director-General for Communication and Information; and former U.S. federal judge Katherine Forrest discussed these challenges at a roundtable moderated by Themba Mahleka from NYU.
The event, organized in partnership with the Permanent Mission of Brazil to the UN, highlighted UNESCO’s Guidelines for the Use of AI in Courts and Tribunals, supported by the European Union. These guidelines constitute the first global framework for ensuring that AI enhances judicial processes without compromising the rule of law, emphasizing the need for human oversight and accountability in all AI-assisted judicial activities.
Participants explored AI’s potential to democratize access to justice, citing Brazil’s fully electronic case management system as an example. Digital tools enable individuals to follow proceedings remotely and help overcome barriers such as language, location, and social discrimination. While AI can increase efficiency, Judge Forrest noted that “AI is trained on human data, and no one comes to court unbiased,” highlighting that technology alone cannot eliminate human judgment errors. The discussion underscored the importance of maintaining a balance between accessibility and fairness, stressing that everyone retains the right to a human judge and lawyer.
Speakers also addressed the risks AI poses to judicial independence. Private control over AI technology, subtle influence on judicial decisions, and external pressures to adopt unregulated tools all threaten impartiality. Margaret Satterthwaite emphasized the concentration of power in private AI providers and the need for low-tech alternatives or careful scrutiny of partnerships to safeguard judicial autonomy. Even supportive AI functions, such as summarizing documents, can inadvertently shape legal reasoning, and courts may face pressures that compromise decision-making priorities.
The roundtable highlighted UNESCO’s Guidelines as a framework for mitigating these risks, with recommendations on how AI should be designed, procured, and used to strengthen human rights, access to justice, and judicial independence. The guidelines also stress the need for transparency, ensuring that citizens can understand and challenge AI-assisted judicial outcomes. UNESCO encourages Member States to develop national guidelines and training based on these principles, offering support through technical assistance, policy dialogue, and capacity-building initiatives to align judicial digital transformation with fundamental human rights.