Listen "Prompt Injection: When Hackers Befriend Your AI"
Episode Synopsis
This is a technical presentation where we'll look at attacks on implementations of Large Language Models (LLMs) used for chatbots, sentiment analysis, and similar applications. Serious prompt injection vulnerabilities can be used by adversaries to completely weaponize your AI against your users.We will look at how so-called "prompt injection" attacks occur, why they work, different variations like direct and indirect injections, and then see if we can find good solutions on how to mitigate those risks. We'll also learn how LLMs are "jailbroken" to ignore their alignment and produce dangerous content.LLMs are not brand new, but we know that their use will increase drastically in the next few years, and therefore it is important to take security seriously by considering the risks involved before using AI for sensitive operations.by: Vetle HjelleRef: https://www.youtube.com/watch?v=S5MKPtRpVpY
More episodes of the podcast Code Conversations
https://www.youtube.com/watch?v=CaZbsbKnOho&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=47
13/01/2026
Cybersecurity in the Era of AI
10/01/2026
ChatGPT and OpenAI API solutions
03/01/2026
Integrating Language Models into Web UIs
30/12/2025
Video Game AI for Business Applications
23/12/2025
Building specialized AI Copilots with RAG
19/12/2025
The Rise of the Design Engineer
16/12/2025
Cracking the Furby Code Evolving an Icon
12/12/2025
ZARZA We are Zarza, the prestigious firm behind major projects in information technology.