Listen "Open Science and LLMs, with Dr. Valentina Pyatkin"
Episode Synopsis
Can open-source large language models really outperform closed ones like Claude 3.5? 🤔

In this episode of the Women in AI Research podcast, Jekaterina Novikova and Malikeh Ehghaghi talk with Valentina Pyatkin, a postdoctoral researcher at the Allen Institute for AI. We dive deep into the future of open science, LLM research, and extending model capabilities.

🔑 Topics we cover:
- Why open-source LLMs sometimes beat closed models
- The value of releasing datasets, recipes, and training infrastructure
- The role of open science in accelerating NLP innovation
- Insights from Valentina's award-winning research journey

REFERENCES:
- Valentina's Google Scholar profile
- OLMo: Accelerating the Science of Language Models
- Tulu 3: Pushing Frontiers in Open Language Model Post-Training
- open-instruct
- Generalizing Verifiable Instruction Following
- RewardBench 2: Advancing Reward Model Evaluation

🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
WiAIR website | LinkedIn | Bluesky | X (Twitter)
More episodes of the podcast Women in AI Research (WiAIR)
Can We Trust AI Explanations? Dr. Ana Marasović on AI Trustworthiness, Explainability & Faithfulness
09/10/2025
Decentralized AI, with Wanru Zhao
25/06/2025
Robots with Empathy, with Dr. Angelica Lim
14/05/2025
Bias in AI, with Amanda Cercas Curry
03/04/2025
Limits of Transformers, with Nouha Dziri
12/03/2025