Listen "HS094: How Risky Is Your Organization’s AI Strategy?"
Episode Synopsis
Large Language Models (LLMs) can be used to generate output that their creators and users didn’t intend: for example, harassment, bomb-making instructions, or assistance with cybercrime. Researchers have created the HarmBench framework to measure how easily an AI can be weaponized. Recently these researchers trumpeted the finding...
More episodes of the podcast The Everything Feed - All Packet Pushers Pods
IPB190: IPv6 in Kubernetes Deployments (18/12/2025)
N4N045: Audience Follow Up & 2026 Preview (18/12/2025)
PP091: News Roundup–Securing MCP, Hunting Backdoors, and Getting the Creeps From AI Kids’ Toys (16/12/2025)
TNO052: Internet History with Len Bosack (12/12/2025)
HN808: Is IT a Young Person’s Game? (12/12/2025)