Backend extensibility
Episode Synopsis
What's the current state of backend extensibility? How did PyTorch evolve from being a CPU- and CUDA-only framework to also supporting AMD ROCm and XLA? What are some problems with adding an out-of-tree backend, and what work is being done to make it better?

Further reading:
- Script for HIPifying PyTorch's source when enabling ROCm: https://github.com/pytorch/pytorch/blob/master/tools/amd_build/build_amd.py
- PyTorch/XLA: https://github.com/pytorch/xla/
- Brian Hirsh's spec on what out-of-tree backend codegen looks like: https://github.com/pytorch/xla/issues/2871
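For a sense of what "HIPifying" means, here is a minimal Python sketch of the idea behind the linked build_amd.py script: a source-to-source rewrite that maps CUDA identifiers onto their HIP equivalents so the same code can build for AMD GPUs. The mapping table and function names below are illustrative assumptions, not the real script's contents.

import re

# Illustrative (incomplete) mapping of CUDA names to HIP names.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpyAsync": "hipMemcpyAsync",
    "cudaStream_t": "hipStream_t",
    "cublasHandle_t": "hipblasHandle_t",
}

# Match whole identifiers only, so e.g. "mycudaMalloc" is untouched.
_PATTERN = re.compile(r"\b(" + "|".join(CUDA_TO_HIP) + r")\b")

def hipify(cuda_source: str) -> str:
    """Rewrite CUDA identifiers in a source string to HIP equivalents."""
    return _PATTERN.sub(lambda m: CUDA_TO_HIP[m.group(1)], cuda_source)

print(hipify("cudaMalloc(&ptr, size); cudaStream_t s;"))
# -> hipMalloc(&ptr, size); hipStream_t s;

The real tool walks PyTorch's source tree and applies a much larger set of such rewrites; this sketch only shows why the approach works at all, namely that the CUDA and HIP APIs are close enough that a mechanical rename covers most of the porting effort.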
More episodes of the podcast PyTorch Developer Podcast
Compiler collectives
04/08/2024
TORCH_TRACE and tlparse
29/04/2024
Higher order operators
21/04/2024
Inductor - Post-grad FX passes
12/04/2024
CUDA graph trees
24/03/2024
Min-cut partitioner
17/03/2024
AOTInductor
02/03/2024
Tensor subclasses and PT2
24/02/2024
Compiled autograd
19/02/2024
PT2 extension points
05/02/2024