Listen "OpenAI's Nuclear Threat Team ☢️ // Demystifying CLIP Data 🧩 // Trillion-Parameter Model Compression 📏"
Episode Synopsis
This episode covers OpenAI's new team for studying catastrophic AI risks, including nuclear threats; Meta Research's work demystifying CLIP data; and the Data Provenance Explorer, a tool that brings transparency and accountability to AI datasets. The hosts also dig into research papers on detecting pretraining data, 4-bit floating-point quantized Transformers, and sub-1-bit compression of trillion-parameter models.
Contact: [email protected]
Timestamps:
00:34 Introduction
02:25 OpenAI forms team to study ‘catastrophic’ AI risks, including nuclear threats
04:34 Demystifying CLIP data from Meta Research
06:27 Data Provenance Explorer
08:33 Fake sponsor
11:11 Detecting Pretraining Data from Large Language Models
12:44 LLM-FP4: 4-Bit Floating-Point Quantized Transformers
14:28 QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models
16:34 Outro