LLM Hallucinations and Machine Unlearning, with Dr. Abhilasha Ravichander

06/08/2025 1h 3min Episode 8

Listen "LLM Hallucinations and Machine Unlearning, with Dr. Abhilasha Ravichander"

Episode Synopsis

In this episode of the Women in AI Research Podcast, hosts Jekaterina Novikova and Malikeh Ehghaghi engage with Abhilasha Ravichander to discuss the complexities of LLM hallucinations, the development of factuality benchmarks, and the importance of data transparency and machine unlearning in AI. The conversation also delves into personal experiences in academia and the future directions of research in responsible AI.

REFERENCES:
Abhilasha Ravichander -- Google Scholar profile
WildHallucinations: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries
HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation
What's In My Big Data?
Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models
RESTOR: Knowledge Recovery in Machine Unlearning
Model State Arithmetic for Machine Unlearning

🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.

WiAIR website

Follow us at:
LinkedIn
Bluesky
X (Twitter)

#LLMHallucinations #FactualityBenchmarks #MachineUnlearning #DataTransparency #ModelMemorization #ResponsibleAI #GenerativeAI #NLPResearch #WomenInAI #AIResearch #WiAIR #wiairpodcast