Google's Gemma 🌟 // Generalized Instruction Tuning 📚 // Multi-object Diffusion 🖼️
Episode Synopsis
Google introduces Gemma, a new family of lightweight, state-of-the-art open models built for responsible AI development.
"Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models" presents a new method for instruction tuning of Large Language Models (LLMs) called Generalized Instruction Tuning (GLAN).
"MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion" addresses the challenge of generating images of multiple objects with spatial relationships and attribute bindings.
"Instruction-tuned Language Models are Better Knowledge Learners" explores how to update factual knowledge in large language models.
Contact: [email protected]
Timestamps:
00:34 Introduction
01:21 Google DeepMind Releases Gemma
03:28 Andrej Karpathy on Gemma's Tokenizer
04:16 Groq Inference Tokenomics: Speed, But At What Cost?
05:51 Fake sponsor
07:44 Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models
09:38 MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion
11:06 Instruction-tuned Language Models are Better Knowledge Learners
12:58 Outro