In 2023, DSIT launched the Fairness Innovation Challenge in collaboration with Innovate UK, with support from UK regulators including the Information Commissioner’s Office and the Equality and Human Rights Commission. The Challenge offered over £465,000 in funding to drive the development of innovative solutions to address bias and discrimination in AI systems. Despite growing attention to AI fairness, organisations continue to face significant hurdles in practice: collecting demographic data is difficult for ethical, regulatory, and practical reasons; fair outcomes are hard to define and measure; and purely technical approaches have limits, with some techniques risking conflict with UK equalities law.
The Challenge required applicants to focus on real-world AI use cases and adopt a socio-technical approach, addressing both statistical and structural biases. Four projects were funded across higher education, financial services, healthcare, and recruitment. The Open University examined fairness in AI-driven learning analytics in higher education, the Alan Turing Institute developed a fairness toolkit for financial sector SMEs using large language models, King’s College London addressed bias in AI early warning systems for cardiac arrest prediction, and Coefficient Systems Ltd focused on bias in automated CV screening tools.
The Open University’s FairAI4EDTech project highlighted the importance of combining technical and human-centred approaches. Building on the OUAnalyse system, which supports over 200,000 students, the team developed a Framework for Responsible AI in Learning Analytics. The project found that fair use of predictive models requires institutions to define clear equity values, select fairness metrics aligned with those values, monitor fairness continuously, and recognise that fairness is dynamic across student groups. Student-focused dashboards were piloted to increase transparency and agency, enabling learners to act on insights in collaboration with their tutors. Focus groups revealed that tutors’ reflective judgment is essential in safeguarding against uncritical reliance on AI predictions, underlining the importance of embedding AI tools within a framework of ethics, training, and professional autonomy.
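The framework’s advice to select fairness metrics aligned with institutional values and monitor them continuously can be made concrete with a small example. The sketch below is illustrative only and is not drawn from the OUAnalyse codebase: it computes an equal-opportunity gap, one metric an institution might choose, across hypothetical student groups.

```python
# Minimal sketch (not from OUAnalyse) of group-level fairness
# monitoring: compare a chosen metric across student groups each
# time the model runs, and flag drift above a threshold.
import numpy as np

def true_positive_rate(y_true, y_pred):
    """TPR = correctly flagged at-risk students / all at-risk students."""
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest pairwise TPR difference across demographic groups.

    Equal opportunity is one metric an institution might pick if its
    stated equity value is that no group of struggling students should
    be less likely to be offered support.
    """
    rates = {g: true_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    values = list(rates.values())
    return rates, max(values) - min(values)

# Example: predictions for 8 students in two hypothetical groups.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates, gap = equal_opportunity_gap(y_true, y_pred, groups)
print(rates, gap)  # review the model if the gap drifts over time
```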
In the financial sector, the Alan Turing Institute evaluated biases in FinBERT, a model for analysing financial text, across Global North and Global South datasets. While the model showed no significant disparities in controlled tests, real-world data exposed inconsistencies, including susceptibility to manipulation: inserting positive statements or numerical values could shift a text’s predicted sentiment. To address this, the team developed FAID, a tool for proactive fairness monitoring, and a set of reusable design patterns promoting traceability, responsibility, explainability, auditability, and digestibility in AI development.
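The manipulation finding suggests a simple perturbation test. The sketch below is a hedged illustration using the publicly available ProsusAI/finbert checkpoint on Hugging Face; the report does not say which FinBERT variant was evaluated or how FAID implements its checks. The idea is to append a positive sentence or a numeric value to a negative statement and watch for a sentiment flip.

```python
# Hedged sketch of a perturbation test for sentiment manipulability,
# using the public ProsusAI/finbert checkpoint (an assumption; the
# project may have evaluated a different FinBERT variant).
from transformers import pipeline

classifier = pipeline("text-classification", model="ProsusAI/finbert")

base = "The company missed its revenue target for the third quarter."
perturbations = [
    base,
    base + " We are excited about the future.",   # appended positive statement
    base + " Revenue was 4.2 billion dollars.",   # appended numeric value
]

for text in perturbations:
    result = classifier(text)[0]
    print(f"{result['label']:>8} ({result['score']:.2f})  {text}")
# Large label or score swings under such small edits are the kind of
# inconsistency a monitoring tool like FAID is meant to surface.
```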
Coefficient Systems Ltd investigated bias in AI-driven CV screening, revealing racial and gender disparities in large language models. Using synthetically generated CVs, the team measured how algorithms ranked candidates for various roles and developed Fairground, an open-source toolkit for testing recruitment AI systems for bias. The project demonstrated how socio-technical approaches, combining technical solutions with human oversight, can foster more equitable recruitment processes.
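The audit method generalises to a small harness: hold the CV content fixed, vary only a demographic signal such as the candidate’s name, and compare scores. The sketch below is a simplified illustration, not the Fairground toolkit itself; the name lists and scoring function are hypothetical placeholders.

```python
# Simplified illustration of a synthetic-CV audit (not Fairground):
# identical CVs that differ only by name should score identically.
import statistics

CV_TEMPLATE = "{name}. 5 years Python experience. BSc Computer Science."

# Hypothetical name lists standing in for demographic signals.
NAME_GROUPS = {
    "group_1": ["James Miller", "Emma Clarke"],
    "group_2": ["Adebayo Okafor", "Fatima Hassan"],
}

def toy_scorer(cv_text: str) -> float:
    """Toy stand-in for the system under audit; in practice this would
    wrap the real screening model, e.g. an LLM ranking call."""
    return float("Python" in cv_text) + float("BSc" in cv_text)

def audit(score_fn):
    """Mean score per group for CVs that differ only by name."""
    return {
        group: statistics.mean(
            score_fn(CV_TEMPLATE.format(name=name)) for name in names
        )
        for group, names in NAME_GROUPS.items()
    }

# Any systematic gap between groups is evidence of name-based bias,
# since every other token in the CV is identical.
print(audit(toy_scorer))  # toy scorer is name-blind; a real model may not be
```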
King’s College London focused on bias in the CogStack Foresight model, used to predict in-hospital cardiac arrest. Their neurosymbolic METHOD approach (Modular Efficient Transformer for Healthcare Outcome Delivery) improved performance for patients with varying levels of clinical documentation and integrated clinical guidelines to address structural biases. Evaluation workshops showed approximately 90% agreement between clinicians and the improved model, evidence that the changes improved both effectiveness and fairness.
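The report gives no implementation detail for METHOD, but the neurosymbolic pattern it describes, a learned risk score constrained by symbolic rules derived from clinical guidelines, can be sketched in outline. Everything below (fields, thresholds, and rules) is hypothetical and for illustration only.

```python
# Rough sketch of a neurosymbolic pattern (not the METHOD code):
# a neural risk score is adjusted by guideline-derived rules, so
# patients with sparse documentation are not silently scored low.
from dataclasses import dataclass

@dataclass
class Patient:
    note_count: int      # proxy for how much documentation exists
    model_risk: float    # neural risk score in [0, 1] (hypothetical)
    systolic_bp: float   # example vital sign used by a rule

LOW_DOCUMENTATION = 3    # hypothetical threshold

def guideline_rules(p: Patient) -> float | None:
    """Symbolic layer: hard, guideline-derived floors on risk."""
    if p.systolic_bp < 90:  # hypotension: escalate regardless of model
        return max(p.model_risk, 0.8)
    return None

def combined_risk(p: Patient) -> tuple[float, str]:
    override = guideline_rules(p)
    if override is not None:
        return override, "guideline override"
    if p.note_count < LOW_DOCUMENTATION:
        # Sparse records: route to a clinician rather than trusting a
        # model trained mostly on well-documented patients.
        return p.model_risk, "low documentation: flag for clinician review"
    return p.model_risk, "model score"

print(combined_risk(Patient(note_count=1, model_risk=0.2, systolic_bp=85)))
```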
Across all projects, key findings highlighted the importance of access to demographic data, careful selection of fairness metrics, and ongoing bias mitigation. Synthetic data emerged as a valuable tool in overcoming privacy and data access challenges, as demonstrated by Coefficient Systems and the Alan Turing Institute. The studies revealed that biases are pervasive in AI systems but can be detected and mitigated through multi-pronged, socio-technical approaches that combine technical innovation, ethical guidance, and human oversight.
Regulatory bodies played a crucial role in supporting responsible innovation. The Equality and Human Rights Commission emphasised the importance of understanding potential AI biases under the Equality Act and the Human Rights Act, focusing on accessibility, transparency, and explainability. The Information Commissioner’s Office stressed data protection by design, ensuring AI technologies comply with the law while fostering public trust. Both regulators highlighted that early integration of ethical and privacy considerations is essential to building fair and trustworthy AI systems.
The Fairness Innovation Challenge demonstrated that meaningful progress in AI fairness is possible when technical innovation is aligned with ethical and regulatory principles. Lessons learned provide practical guidance for organisations seeking to make AI systems fairer, more transparent, and accountable. The findings set a strong foundation for future efforts to ensure AI benefits all members of society, supporting ethical innovation while safeguarding individual rights.