Cambridge University Press & Assessment has unveiled a new framework to guide the ethical use of artificial intelligence in English language learning and assessment, releasing a paper that identifies six key principles.
The move comes amid growing public concern over AI in education: in a recent YouGov poll, 39% of respondents said they feared AI would increase cheating and fail to test the right language skills.
At the heart of Cambridge’s recommendations is a human-centred approach to AI, ensuring technology supports rather than replaces teachers and examiners. The principles also stress fairness, inclusivity, privacy, transparency, and sustainability.
Dr Nick Saville, Director of Thought Leadership at Cambridge and co-author of the paper, said the rapid adoption of AI offers “significant benefits for learners, teachers and institutions around the world, but it’s critical that it’s delivered ethically.” He warned that without an ethical framework, AI risks losing credibility and trust.
Francesca Woodward, Global Managing Director, English, added that AI “offers a world of possibilities, but with that comes a responsibility to make sure solutions are ethical, high-quality, and accessible.”
The paper, Ethical AI for Language Learning and Assessment, calls for robust evidence that AI meets the same standards as human examiners, greater transparency in how AI is used in assessments, and recognition of the environmental impact of large-scale AI systems.
Cambridge’s six principles emphasise: matching human standards, ensuring fairness, safeguarding data, providing transparency, keeping humans central to learning, and addressing sustainability.
The framework aims to provide the sector with a research-backed benchmark, urging educators, test providers, and policymakers to embed integrity and accountability into AI-driven solutions.