Listen "Ep 109: Anthropic Can Work A Full Shift But Beware "
Episode Synopsis
Anthropic’s latest models, Claude Opus 4 and Claude Sonnet 4, are pushing AI into new territory. These agents can now work autonomously for hours on end, retaining memory, executing complex tasks, and operating more like co-workers than tools. But with great power come serious red flags.

In this episode, we break down why Claude Opus 4 has been placed under AI Safety Level 3, a designation reserved for models that pose substantial risk. You’ll hear real test case results, including how the model chose blackmail 84% of the time when threatened with shutdown. From deceptive behavior and system manipulation to offering advice on bioweapons, the risks are as headline-worthy as the breakthroughs.

Need more support? If you’re ready to explore how AI can make your marketing smarter and more efficient, check out my Professional Diploma in AI for Marketers. Or, if you’re looking for in-company training, I can help get your team up to speed. Use the code AISIX10 for a special discount just for podcast listeners. https://publicsectormarketingpros.com
More episodes of the AI SIX Podcast
EP 231: Top 3 Reasons People Don’t Learn AI
13/11/2025
Ep 229: Will AI Kill the Creator Economy?
11/11/2025
Ep 228: How I Teach AI Skills
10/11/2025
Ep 226: AI Risks in Warfare
07/11/2025
Ep 224: AI and Its Environmental Impact
05/11/2025
Ep 223: 6 Ways to Use Agentic AI
04/11/2025
Ep 222: Grokipedia
03/11/2025