The Information Bottleneck

EP12: Adversarial attacks and compression with Jack Morris

58 min · 3 November 2025

In this episode of The Information Bottleneck Podcast, we're joined by Jack Morris, a PhD student at Cornell, to discuss adversarial examples (Jack created TextAttack, an early software package for attacking NLP models), the Platonic representation hypothesis, the implications of inversion techniques, and the role of compression in language models.

Links:

Jack's Website - https://jxmo.io/

TextAttack - https://arxiv.org/abs/2005.05909

How much do language models memorize? - https://arxiv.org/abs/2505.24832

DeepSeek OCR - https://www.arxiv.org/abs/2510.18234

Chapters:

00:00 Introduction and AI News Highlights

04:53 The Importance of Fine-Tuning Models

10:01 Challenges in Open Source AI Models

14:34 The Future of Model Scaling and Sparsity

19:39 Exploring Model Routing and User Experience

24:34 Jack's Research: TextAttack and Adversarial Examples

29:33 The Platonic Representation Hypothesis

34:23 Implications of Inversion and Security in AI

39:20 The Role of Compression in Language Models

44:10 Future Directions in AI Research and Personalization


The Information Bottleneck with Ravid Shwartz-Ziv & Allen Roush is available on multiple platforms. The information on this page comes from public podcast feeds.