Listen "Using LLMs to Evaluate Code"
Episode Synopsis
Finding and fixing weaknesses and vulnerabilities in source code has been an ongoing challenge. There is a lot of excitement about the ability of large language models (LLMs), a form of generative AI, to produce and evaluate programs. One question raised by this ability is: do these systems help in practice? We ran experiments with various LLMs to see whether they could correctly identify problems in source code or determine that there were no problems. This webcast will provide background on our methods and a summary of our results.

What Will Attendees Learn?
• How well LLMs can evaluate source code
• How that capability has evolved as new LLMs are released
• How to address potential gaps in capability
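To make the kind of experiment described in the synopsis concrete, the sketch below shows one way a harness might ask an LLM whether a source snippet contains a weakness and score the verdict against a known label. It is a minimal, hypothetical illustration: the query_llm helper, the prompt wording, and the test cases are assumptions for demonstration, not the SEI's actual methodology.

from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    source: str
    has_flaw: bool  # ground-truth label used to score the LLM's verdict


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM API (an assumption, not a real client).
    A real harness would send `prompt` to a model and return its text reply."""
    return "NO FLAW"  # canned reply so the sketch runs without an API key


def build_prompt(source: str) -> str:
    # Ask for a fixed-format verdict so the reply can be scored automatically.
    return (
        "Review the following C function for weaknesses or vulnerabilities. "
        "Reply with 'FLAW: <short description>' or 'NO FLAW'.\n\n" + source
    )


def evaluate(cases: list[TestCase]) -> float:
    """Return the fraction of cases where the LLM's verdict matches the label."""
    correct = 0
    for case in cases:
        reply = query_llm(build_prompt(case.source))
        predicted_flaw = reply.strip().upper().startswith("FLAW")
        correct += predicted_flaw == case.has_flaw
    return correct / len(cases)


if __name__ == "__main__":
    cases = [
        TestCase(
            name="unbounded strcpy",
            source="void f(char *s) { char buf[8]; strcpy(buf, s); }",
            has_flaw=True,
        ),
        TestCase(
            name="bounded copy",
            source="void f(char *s) { char buf[8]; strncpy(buf, s, 7); buf[7] = 0; }",
            has_flaw=False,
        ),
    ]
    print(f"accuracy: {evaluate(cases):.2f}")

Re-running a harness like this each time a new model is released is one simple way to track the evolution of capability that the webcast discusses.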
More episodes of the Software Engineering Institute (SEI) Webcast Series podcast
5 Essential Questions for Implementing the Software Acquisition Pathway and the Tools to Tackle Them
23/10/2025
Q-Day Countdown: Are You Prepared?
15/10/2025
Identifying AI Talent for the DoD Workforce
18/07/2025
Model Your Way to Better Cybersecurity
10/07/2025
DevSecOps: See, Use, Succeed
27/06/2025