Listen "ChatGPT Plugins Prompt Injections 💉 // More AI X-Risk 💀 // Unfair Evaluation of LLMs 👎"
Episode Synopsis
This episode covers risks associated with AI technology, including prompt injection attacks against ChatGPT plugins and a statement warning that AI could pose an extinction-level risk. On the research side: MeZO, a new memory-efficient optimizer for fine-tuning large language models using only forward passes; a paper investigating whether language models can recognize their own "hallucinated" references; and a study uncovering a bias in the evaluation paradigm of using large language models to score the quality of responses generated by other models, along with two calibration strategies proposed to address it.
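For context on the MeZO paper mentioned above, here is a minimal sketch of the zeroth-order (SPSA-style) idea it builds on: estimate a directional gradient from two forward passes and step along the random perturbation, with no backpropagation. The NumPy toy below and the name `mezo_step` are illustrative assumptions for this summary, not the paper's actual implementation.

```python
import numpy as np

def mezo_step(params, loss_fn, lr=1e-2, eps=1e-3, seed=0):
    """One zeroth-order update: two forward passes, no backprop.

    params  : 1-D array of model parameters (toy stand-in for a full model)
    loss_fn : callable mapping params -> scalar loss
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(params.shape)   # random perturbation direction

    loss_plus = loss_fn(params + eps * z)   # forward pass 1
    loss_minus = loss_fn(params - eps * z)  # forward pass 2

    # SPSA projected-gradient estimate along z
    grad_proj = (loss_plus - loss_minus) / (2 * eps)

    # SGD-style update using the estimated directional gradient
    return params - lr * grad_proj * z

# Toy usage: minimize a quadratic using forward passes only
theta = np.array([3.0, -2.0])
loss = lambda p: float(np.sum(p ** 2))
for step in range(1000):
    theta = mezo_step(theta, loss, seed=step)  # fresh perturbation each step
print(theta, loss(theta))  # should end up near the minimum at the origin
```

The memory trick in MeZO is that the perturbation z can be regenerated from a stored random seed instead of being kept in memory, so fine-tuning fits in roughly the memory footprint of inference.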
Contact: [email protected]
Timestamps:
00:34 Introduction
01:33 ChatGPT Plugins aren’t safe, Prompt Injections
02:48 Statement on AI Risk
04:40 AI is Eating The World
06:04 Fake sponsor
08:38 Fine-Tuning Language Models with Just Forward Passes
10:10 Do Language Models Know When They're Hallucinating References?
11:41 Large Language Models are not Fair Evaluators
13:50 Outro
More episodes of the podcast GPT Reviews
OpenAI's 'Strawberry' AI 🚀 // World's Fastest AI Inference ⚡ // Photo-realistic 3D Avatars 🎨
28/08/2024
Grok-2's Speed & Accuracy 🚀 // OpenAI's Transparency Push 🗳️ // LlamaDuo for Local LLMs 🔄
27/08/2024
Amazon Cloud Chief Spicy Takes 🚀 // Zuckerberg's AI Vision 📈 // Multimodal Models for Safety 🔒
23/08/2024
Grok-2 Beta Release 🚀 // Apple's $1,000 Home Robot 🏡 // ChemVLM Breakthrough in Chemistry 🔬
15/08/2024
Gemini Live AI Assistant 📱 // OpenAI’s Coding Benchmark ✅ // LongWriter’s 10K Word Generation ✍️
14/08/2024