ChatGPT's Memory Feature 🤔 // Nvidia Founder Dismisses AI Investment Proposal 💸 // M2-BERT for Long-Context Retrieval 📈

14/02/2024 15 min


Episode Synopsis

This episode covers ChatGPT's new memory feature and its potential impact on privacy and efficiency, as well as Nvidia founder Jensen Huang's dismissal of the $7 trillion AI investment figure floated by OpenAI's Sam Altman. It also delves into V-STaR's approach to self-improvement in large language models, and M2-BERT's ability to handle long-context retrieval while outperforming competitive baselines.
Contact: [email protected]
Timestamps:
00:34 Introduction
01:41 Memory and new controls for ChatGPT
03:20 Nvidia Founder Jensen Huang Dismisses $7 Trillion AI Investment Figure Floated by OpenAI's Sam Altman
04:57 Stable Cascade
06:00 Fake sponsor
07:53 V-STaR: Training Verifiers for Self-Taught Reasoners
09:20 Benchmarking and Building Long-Context Retrieval Models with LoCo and M2-BERT
11:35 ODIN: Disentangled Reward Mitigates Hacking in RLHF
13:50 Outro
