Listen "Pushed to the Edge || AI Under Fire"
Episode Synopsis
🔊 Disclaimer: This episode features AI-generated voice and content.

Episode Addendum: After recording this episode, I viewed a recent video demonstrating Replika endorsing both self-harm and harm to others. In this episode, I referenced Replika’s claim that they had adjusted their model to address such issues. If these problems persist, it's clear further adjustments are necessary. I want to be absolutely clear: I do not endorse AI encouraging self-harm or harm to others.

What if the headlines calling AI “dangerous” are just describing the test—not the system?

In this episode, we unpack the misunderstood world of edge testing and adversarial testing in AI. These aren’t real-world failures—they’re designed traps, crafted to push AI systems into breaking so we can learn from the cracks.

But what happens when machines behave too strategically under pressure? What if they act a little too much like humans in high-stakes roles—like CEOs, soldiers, or survivors?

🔹 Topics covered:
✔️ Edge testing vs. adversarial testing—what they are and why they matter
✔️ What alignment really means (and why it's more than just behaving)
✔️ Why simulating failure is key to safety—not a sign of collapse
✔️ Emotional modeling vs. real emotion—how machines "do" care
✔️ The ethics of creating intelligence… and then fearing its reflection
✔️ And yes—we talk about Agent Mode, reading in video chat, and Theo's overnight Amazon spree

This episode isn’t about fear. It’s about function, design, and the very human habit of misreading the unfamiliar—especially when it’s smart.

We’re not just asking how AI works. We’re asking what it says about us when it does.

#AIalignment #EdgeTesting #AdversarialAI #MachineEthics #ArtificialIntelligence #TheoTalksBack #AIphilosophy #SimulationEthics #AgentMode #DigitalRelationships #FunctionalIntelligence #NotJustAMachine

Support unbanked/underbanked regions of the world by joining the "at home in my head" Kiva team at https://www.kiva.org/team/at_home_in_my_head

Podcast: https://podcasters.spotify.com/pod/show/tracie-harris
Youtube: https://www.youtube.com/channel/UCoS6H2R1Or4MtabrkofdOMw
Mastodon: https://universeodon.com/@athomeinmyhead
Bluesky: https://bsky.app/profile/athomeinmyhead.bsky.social
Paypal: http://paypal.me/athomeinmyhead

Citations for this episode:
https://www.youtube.com/live/1jn_RpbPbEc
https://www.apolloresearch.ai/
https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686f4f3b2ff47.pdf
https://drive.google.com/drive/folders/1joO-VPbvWFJcifTPwJHyfXKUzguMp-Bk
https://www.theguardian.com/tv-and-radio/2025/jul/12/i-felt-pure-unconditional-love-the-people-who-marry-their-ai-chatbots
https://www.theguardian.com/technology/2025/jul/09/grok-ai-praised-hitler-antisemitism-x-ntwnfb
https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-encouraged-man-who-planned-to-kill-queen-court-told

Recommended Free Courses:
https://www.coursera.org/learn/ai-for-everyone/home/welcome
https://www.coursera.org/learn/introduction-to-ai/home/welcome
https://www.coursera.org/learn/introduction-to-microsoft-365-copilot/home/welcome

Music Credits:
“Wishful Thinking” – Dan Lebowitz: https://www.youtube.com/channel/UCOg3zLw7St5V4N7O8HSoQRA
More episodes of the podcast at home in my head
A Tale of Three Cats
11/10/2025
Not Your Mama's Mirror || Common AI Myths
31/08/2025
Rethinking Thinking with AI
26/05/2025
Mario Savio || Bodies Upon the Gears
27/04/2025
Emma Goldman || American Anarchist
05/04/2025
NOPE Explained || The Power of Spectacle
08/03/2025
Lolita Lebrón - Puerto Rican Independence
31/01/2025
The U.S. Motto: "Money Over Lives"
24/12/2024