Listen "Functionalization"
Episode Synopsis
Functionalization is the process by which we remove mutation from autograd graphs in PyTorch, leaving us with a purely functional graph that we can execute in the normal way. Why do we need to do functionalization? What makes it not so easy to do? How do we do it? And how does it compare to mutation removal that you might see in a compiler?

Further reading:
- Section 3.1 of this paper on PyTorch AD https://openreview.net/pdf/25b8eee6c373d48b84e5e9c6e10e7cbbbce4ac73.pdf predates our implementation of inplace autograd but accurately reports the subtleties and correctly predicts the implementation strategy we ended up taking
- RFC to generalize the functionalization mechanism to be available to arbitrary backends https://github.com/pytorch/rfcs/pull/19
- Code that handles lazily updating views when the base is updated https://github.com/pytorch/pytorch/blob/e5e095cbe4dbc5a601f98e6134dcbd59c6342d7d/torch/csrc/autograd/variable.cpp#L556-L603
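To make the idea concrete, here is a hand-written before/after sketch (not the actual pass) of what functionalization does to a small program that mutates a view: the in-place add becomes an out-of-place add, and the update is propagated back into the base so later reads of the base see it. In the real implementation the propagation goes through view-inverse/scatter ops and handles arbitrary strides; re-viewing the contiguous result here is a simplification.

```python
import torch

# Original program: mutates a view in place, and the mutation is
# visible through the base tensor `a` because `b` aliases it.
def f(a):
    b = a.view(2, 2)
    b.add_(1)
    return a

# Functionalized equivalent (hand-written sketch): every op returns a
# fresh tensor, and the update to the view is written back into a new
# "updated base" by re-applying the view relationship in reverse.
def f_functional(a):
    b = a.view(2, 2)
    b_updated = b.add(1)            # out-of-place replacement for add_
    a_updated = b_updated.view(4)   # propagate the update back to the base
    return a_updated

a1 = torch.zeros(4)
a2 = torch.zeros(4)
print(torch.equal(f(a1), f_functional(a2)))  # True: same result, no mutation in the second version
```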
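The last link above is the code that lazily updates views when their base is replaced. Below is a toy Python model of that idea (the names LazyView and update_base are hypothetical; the real logic lives in the linked variable.cpp): a view records how it was created plus a generation counter on its base, and only regenerates its value when it is next read after the base has been functionally replaced.

```python
import torch

class LazyView:
    """Toy model of lazy view rebasing: remember how the view was made
    from its base and recompute it only on demand, when the base has
    been replaced since we last looked. Hypothetical sketch, not the
    PyTorch implementation."""
    def __init__(self, base_cell, view_fn):
        self.base_cell = base_cell            # [tensor, generation]
        self.view_fn = view_fn                # how to recreate the view from the base
        self.seen_generation = base_cell[1]
        self.value = view_fn(base_cell[0])

    def read(self):
        # Regenerate lazily if the base was updated after this view was materialized.
        if self.base_cell[1] != self.seen_generation:
            self.value = self.view_fn(self.base_cell[0])
            self.seen_generation = self.base_cell[1]
        return self.value

def update_base(base_cell, new_value):
    # Functionally replace the base and bump its generation,
    # instead of eagerly rewriting every outstanding view.
    base_cell[0] = new_value
    base_cell[1] += 1

base = [torch.arange(4.0), 0]
v = LazyView(base, lambda t: t.view(2, 2))
update_base(base, torch.ones(4))
print(v.read())  # reflects the new base only at the moment it is read
```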