Listen "EP12: Adversarial attacks and compression with Jack Morris "
Episode Synopsis
In this episode of the Information Bottleneck Podcast, we host Jack Morris, a PhD student at Cornell, to discuss adversarial examples (Jack created TextAttack, one of the first software packages for adversarial attacks on NLP models), the Platonic representation hypothesis, the implications of inversion techniques, and the role of compression in language models.

Links:
Jack's Website - https://jxmo.io/
TextAttack - https://arxiv.org/abs/2005.05909
How much do language models memorize? - https://arxiv.org/abs/2505.24832
DeepSeek OCR - https://www.arxiv.org/abs/2510.18234

Chapters:
00:00 Introduction and AI News Highlights
04:53 The Importance of Fine-Tuning Models
10:01 Challenges in Open Source AI Models
14:34 The Future of Model Scaling and Sparsity
19:39 Exploring Model Routing and User Experience
24:34 Jack's Research: TextAttack and Adversarial Examples
29:33 The Platonic Representation Hypothesis
34:23 Implications of Inversion and Security in AI
39:20 The Role of Compression in Language Models
44:10 Future Directions in AI Research and Personalization
More episodes of the podcast The Information Bottleneck
EP20: Yann LeCun (15/12/2025)
EP18: AI Robotics (01/12/2025)
EP17: RL with Will Brown (24/11/2025)
EP16: AI News and Papers (17/11/2025)
EP14: AI News and Papers (10/11/2025)
EP11: JEPA with Randall Balestriero (28/10/2025)