Listen "Why AI Detectors Don't Work for Education "
Episode Synopsis
In this episode of Ed-Technical, Libby and Owen explore why traditional AI detection tools are struggling in academic settings. As students adopt increasingly sophisticated methods to evade AI detection - like paraphrasing tools, hybrid writing, and sequential model use - detection accuracy drops and false positives rise. Libby and Owen look at the research showing why reliable detection with automated tools is so difficult, including why watermarking and statistical analysis often fail in real-world conditions. The conversation shifts toward process-based and live assessments, such as keystroke tracking and oral exams, which offer more dependable ways to evaluate student work. They also discuss the institutional challenges that prevent widespread adoption of these methods, like resource constraints and student resistance. Ultimately, they ask how the conversation about detection could lead towards more meaningful assessment.

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)

Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical

Subscribe to BOLD’s newsletter: https://bold.expert/newsletter

Stay up to date with all the latest research on child development and learning: https://bold.expert

Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
More episodes of the podcast Ed-Technical
Assessment in Education: To AI or Not to AI?
14/08/2025
Is ChatGPT Rotting Your Brain?
17/07/2025