Transformer Linearity, Face-Adapter Diffusion Models, Cross-Layer Attention Shrinks LLMs, Image Generation Breakthrough

23/05/2024 · 10 min · Episode 33

Episode Synopsis


Your Transformer is Secretly Linear

Diffusion for World Modeling: Visual Details Matter in Atari

Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control

Reducing Transformer Key-Value Cache Size with Cross-Layer Attention

OmniGlue: Generalizable Feature Matching with Foundation Model Guidance

Personalized Residuals for Concept-Driven Text-to-Image Generation
