"Meta's Llama 3.1 vs. GPT-4o 🤯 // OpenAI's own AI chips 🧐 // SlowFast-LLaVA for Video LLMs 🎬"
Episode Synopsis
Meta's upcoming Llama 3.1 models could outperform the current state-of-the-art closed-source LLM, OpenAI's GPT-4o.
OpenAI is planning to develop its own AI chip to optimize performance and potentially supercharge its progress toward AGI.
Apple's SlowFast-LLaVA is a new training-free video large language model that captures both detailed spatial semantics and long-range temporal context in video without exceeding the token budget of commonly used LLMs.
Google's Conditioned Language Policy (CLP) is a general framework that builds on techniques from multi-task training and parameter-efficient finetuning to develop steerable models that can trade off multiple conflicting objectives at inference time.
Contact: [email protected]
Timestamps:
00:34 Introduction
01:28 LLAMA 405B Performance Leaked
03:01 OpenAI Wants Its Own AI Chips
04:25 Towards more cooperative AI safety strategies
06:01 Fake sponsor
07:35 SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models
09:17 AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?
10:56 Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning
12:46 Outro