Listen "LLM Hallucinations and Machine Unlearning, with Dr. Abhilasha Ravichander"
Episode Synopsis
In this episode of the Women in AI Research podcast, hosts Jekaterina Novikova and Malikeh Ehghaghi talk with Dr. Abhilasha Ravichander about the complexities of LLM hallucinations, the development of factuality benchmarks, and the importance of data transparency and machine unlearning in AI. The conversation also covers personal experiences in academia and future directions for research in responsible AI.

REFERENCES:
Abhilasha Ravichander -- Google Scholar profile
WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries
HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation
What's In My Big Data?
Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models
RESTOR: Knowledge Recovery in Machine Unlearning
Model State Arithmetic for Machine Unlearning

🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website
Follow us at:
LinkedIn
Bluesky
X (Twitter)

#LLMHallucinations #FactualityBenchmarks #MachineUnlearning #DataTransparency #ModelMemorization #ResponsibleAI #GenerativeAI #NLPResearch #WomenInAI #AIResearch #WiAIR #wiairpodcast
More episodes of the podcast Women in AI Research (WiAIR)
Can We Trust AI Explanations? Dr. Ana Marasović on AI Trustworthiness, Explainability & Faithfulness
09/10/2025
Decentralized AI, with Wanru Zhao
25/06/2025
Robots with Empathy, with Dr. Angelica Lim
14/05/2025
Bias in AI, with Amanda Cercas Curry
03/04/2025
Limits of Transformers, with Nouha Dziri
12/03/2025