"Strengthening Resilience to AI Risk: A Guide for UK Policymakers"
Episode Synopsis
This report from the Centre for Emerging Technology and Security and the Centre for Long-Term Resilience identifies policy levers as they apply to different stages of the AI lifecycle, which it splits into three stages: design, training, and testing; deployment and usage; and longer-term deployment and diffusion. It also introduces a risk mitigation hierarchy that ranks approaches in decreasing order of preference, arguing that "policy interventions will be most effective if they intervene at the point in the lifecycle where risk first arises." While the document is written for UK policymakers, most of its findings are broadly applicable.

Original text: https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf

Authors: Ardi Janjeva, Nikhil Mulani, Rosamund Powell, Jess Whittlestone, and Shahar Avin

A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.