Listen "Stable Diffusion"
Episode Synopsis
Diffusion models are generative models that learn to create data by reversing a process that gradually adds noise to a training sample. Stable Diffusion uses a U-Net to iteratively denoise a compressed latent representation of the image rather than the full-resolution pixels, which makes generation far more efficient; text prompts are incorporated through CLIP embeddings and cross-attention. These models can be adapted for video generation by adding temporal layers or using 3D U-Nets, and conditioning the diffusion process on text or other inputs is a key feature of the approach.
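As a rough illustration of the mechanics described in the synopsis, here is a minimal DDPM-style sketch of the forward noising step and one reverse denoising step. This is not the actual Stable Diffusion implementation: the linear `betas` schedule, the tensor shapes, and the `model(x_t, t, cond)` call standing in for the text-conditioned U-Net are all assumptions made for the example.

```python
# Minimal sketch of diffusion forward noising and one reverse denoising step.
# Assumptions: linear beta schedule, a hypothetical U-Net call model(x_t, t, cond).
import torch

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative product of alphas

def add_noise(x0, t, noise):
    """Forward process q(x_t | x_0): blend the clean latent with Gaussian noise."""
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

def denoise_step(model, x_t, t, cond):
    """One reverse step: `model` predicts the added noise, conditioned on `cond`
    (in Stable Diffusion, CLIP text embeddings injected via cross-attention)."""
    eps_hat = model(x_t, t, cond)           # hypothetical U-Net call signature
    a, a_bar, b = alphas[t], alpha_bars[t], betas[t]
    mean = (x_t - b / (1.0 - a_bar).sqrt() * eps_hat) / a.sqrt()
    if t > 0:
        mean = mean + b.sqrt() * torch.randn_like(x_t)  # inject noise except at t = 0
    return mean

# Toy usage with a stand-in "U-Net" that simply predicts zero noise.
x0 = torch.randn(1, 4, 8, 8)                # a fake compressed latent
noise = torch.randn_like(x0)
x_t = add_noise(x0, torch.tensor([500]), noise)
dummy_unet = lambda x, t, cond: torch.zeros_like(x)
x_prev = denoise_step(dummy_unet, x_t, 500, cond=None)
```

In an actual sampler, `denoise_step` would be applied in a loop from t = T-1 down to 0, starting from pure noise, with the real U-Net and the prompt's CLIP embeddings supplied as `cond`.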
More episodes of the podcast Large Language Model (LLM) Talk:
Kimi K2 (22/07/2025)
Mixture-of-Recursions (MoR) (18/07/2025)
MeanFlow (10/07/2025)
Mamba (10/07/2025)
LLM Alignment (14/06/2025)
Why We Think (20/05/2025)
Deep Research (12/05/2025)
vLLM (04/05/2025)
Qwen3: Thinking Deeper, Acting Faster (04/05/2025)