HS094: How Risky Is Your Organization’s AI Strategy?

11/02/2025 24 min

Episode Synopsis

Large language models (LLMs) can be made to generate output their creators and users never intended: harassment, bomb-making instructions, or assistance with cybercrime. Researchers created the HarmBench framework to measure how easily an AI model can be weaponized. Recently these researchers trumpeted the finding...
