Listen "98 - Foundations of Large Language Models ( Tong Xiao and Jingbo Zhu)"
Episode Synopsis
This podcast is based on the paper "Foundations of Large Language Models" by Tong Xiao and Jingbo Zhu. It offers a comprehensive exploration of Large Language Models (LLMs), beginning with an examination of pre-training methods in Natural Language Processing, including supervised and self-supervised approaches such as the masked language modeling used by models like BERT. It then moves to a detailed discussion of LLMs, covering their architecture, training challenges, and the critical problem of aligning models with human preferences through techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). A significant portion of the episode focuses on LLM inference, explaining fundamental steps such as prefilling and decoding, along with methods for improving efficiency and scalability, including prompt engineering and advanced search strategies. The episode also touches on crucial considerations such as bias in training data, privacy concerns, and the emergent abilities and scaling laws that govern LLM performance.
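For a concrete sense of the masked language modeling objective mentioned above, here is a minimal sketch using the Hugging Face transformers library. This example is not from the paper or the episode; the model choice ("bert-base-uncased") and the example sentence are illustrative assumptions. BERT is asked to predict the token hidden behind [MASK]:

```python
from transformers import pipeline

# Minimal masked-language-modeling demo: BERT predicts the [MASK] token.
# The model name and example sentence are assumptions for illustration.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
predictions = unmasker(
    "Large language models are trained on [MASK] amounts of text."
)

for p in predictions:
    # Each prediction carries a candidate token and its probability.
    print(f"{p['token_str']}: {p['score']:.3f}")
```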
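The prefilling and decoding steps of LLM inference can likewise be sketched in code, here assuming a small causal model such as GPT-2 and greedy decoding (both assumptions, not details from the paper). The prompt is processed in a single forward pass that fills the key-value cache (prefilling); tokens are then generated one at a time while reusing that cache (decoding):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model choice ("gpt2"), prompt, and generation length are illustrative.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Large language models"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Prefilling: one forward pass over the whole prompt, caching keys/values.
with torch.no_grad():
    out = model(input_ids, use_cache=True)
past = out.past_key_values
next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)

# Decoding: generate one token per step, reusing the cached keys/values
# so each step only processes the newly generated token.
generated = [next_token]
with torch.no_grad():
    for _ in range(10):
        out = model(next_token, past_key_values=past, use_cache=True)
        past = out.past_key_values
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated.append(next_token)

print(prompt + tokenizer.decode(torch.cat(generated, dim=-1)[0]))
```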
More episodes of the podcast AI Coach - Anil Nathoo
101 - Why Language Models Hallucinate? (08/09/2025)
99 - Swarm Intelligence for AI Governance (04/09/2025)
95 - Infosys Agentic AI Playbook (03/09/2025)
97 - AI Agents Versus Agentic AI (31/08/2025)
96 - Synergy Multi-Agent Systems (30/08/2025)
93 - AI Maturity Index 2025 (28/08/2025)
92 - Thomson Reuters - Agentic AI Guide (27/08/2025)