INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)

04/03/2024 54 min Episode 15

Episode Synopsis
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.

00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood webinar
22:02 - Regulations.gov comments
23:48 - Open letters
26:15 - EU AI Act
35:07 - Effective accelerationism
40:50 - Divide and conquer dynamics
45:40 - AI "art"
53:09 - Outro

Links to all articles/papers mentioned throughout the episode can be found below, in order of their appearance.

StakeOut.AI
AI Governance Scorecard (go to Pg. 3)
Pause AI
Regulations.gov
USCO StakeOut.AI Comment
OMB StakeOut.AI Comment
AI Treaty open letter
TAISC
Alpaca: A Strong, Replicable Instruction-Following Model
References on EU AI Act and Cedric O:
Tweet from Cedric O
EU policymakers enter the last mile for Artificial Intelligence rulebook
AI Act: EU Parliament's legal office gives damning opinion on high-risk classification 'filters'
EU's AI Act negotiations hit the brakes over foundation models
The EU AI Act needs Foundation Model Regulation
BigTech's Efforts to Derail the AI Act
Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
Divide-and-Conquer Dynamics in AI-Driven Disempowerment