"Meta's Llama 3 ⏰ // Intel's Enterprise GenAI 💥 // Microsoft's Direct Nash Optimization for LMs 🚀"
Episode Synopsis
Meta is launching its Llama 3 open source LLM with 140 billion parameters, closing the gap with OpenAI's ChatGPT.
Intel's Gaudi 3 AI accelerator aims to break down proprietary walls and bring choice to the enterprise GenAI market, with OEMs like Dell and Lenovo adopting it.
Microsoft Research's Direct Nash Optimization (DNO) is a new approach to improving language models, achieving state-of-the-art win-rates against GPT-4-Turbo.
UniFL is a unified framework that uses feedback learning to enhance diffusion models, improving both the quality of generated images and inference speed.
Contact: [email protected]
Timestamps:
00:34 Introduction
01:47 Meta confirms that its Llama 3 open source LLM is coming in the next month
03:17 Intel Breaks Down Proprietary Walls to Bring Choice to Enterprise GenAI Market
05:18 QCon London: Meta Used Monolithic Architecture to Ship Threads in Only Five Months
06:42 Fake sponsor
09:10 Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences
10:48 UniFL: Improve Stable Diffusion via Unified Feedback Learning
12:24 MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies
14:13 Outro