"Using Gen AI on your code, what could possibly go wrong?"
Episode Synopsis
With GenAI, developers are shifting from traditional code reuse to generating new code snippets by prompting GenAI, a significant change in how software gets developed. Several academic studies show that AI-generated code from LLMs trained on vulnerable OSS implementations tends to be vulnerable itself. Another study showed that developers trust GenAI-created code more than human-created code; combined with higher code velocity, this results in more vulnerabilities in the output. Using an AI system that runs an LLM also carries additional risks, including jailbreaks, data poisoning, malicious agents, recursive learning, and IP infringement.

In this presentation, we examine real-world data from several academic studies to understand how GenAI is changing software security, the risks it introduces, and possible strategies to address these emerging issues.

Ref: https://www.youtube.com/watch?v=krDJlrw5mM0&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=44
More episodes of the podcast Code Conversations
ChatGPT and OpenAI API solutions
03/01/2026
Integrating Language Models into Web UIs
30/12/2025
Video Game AI for Business Applications
23/12/2025
Building specialized AI Copilots with RAG
19/12/2025
The Rise of the Design Engineer
16/12/2025
Cracking the Furby Code: Evolving an Icon
12/12/2025
LLM Process: Prompt to Prediction
05/12/2025