DyNN-Offload: Efficient Memory for Dynamic Neural Networks

08/08/2025 21 min


Episode Synopsis

This document introduces DyNN-Offload, a memory management system designed to overcome the GPU memory limits encountered when training large dynamic neural networks (DyNNs). Unlike traditional methods, which struggle with DyNNs' unpredictable memory access patterns, DyNN-Offload takes a learned approach: a lightweight "pilot model" predicts tensor access orders. Operating on an idiom-based representation of network operations, the pilot model efficiently guides the migration of tensors between CPU and GPU memory, enabling training of significantly larger DyNNs on a single GPU. The system outperforms existing solutions such as unified virtual memory (UVM) and dynamic tensor rematerialization (DTR) while introducing minimal overhead. Its transparent integration with existing deep learning frameworks makes it a practical tool for advancing large-scale DyNN development.

Source: 2024 - https://web.cs.ucla.edu/~harryxu/papers/ren-hpca24.pdf - Enabling Large Dynamic Neural Network Training with Learning-based Memory Management
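To make the core idea concrete, here is a minimal toy sketch of prediction-guided offloading: a predicted tensor access order decides which tensors stay resident in (simulated) GPU memory, and mispredicted accesses fall back to on-demand faulting. All class and method names are illustrative assumptions, not DyNN-Offload's actual API, and the pilot model itself is abstracted away as a given ordering.

```python
from collections import OrderedDict

class PilotGuidedOffloader:
    """Toy model of learned offloading (illustrative, not the paper's API).

    The 'pilot model' is abstracted as a predicted access order; tensors
    predicted to be used soonest are prefetched to GPU, the rest are
    offloaded to host (CPU) memory.
    """

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity   # max tensors resident on "GPU"
        self.gpu = OrderedDict()           # tensor_id -> data, GPU-resident
        self.cpu = {}                      # tensor_id -> data, offloaded

    def plan(self, predicted_order):
        """Apply the pilot model's prediction: keep the soonest-needed
        tensors on GPU, offload everything else to host memory."""
        keep = set(predicted_order[: self.gpu_capacity])
        for tid in list(self.gpu):
            if tid not in keep:
                self.cpu[tid] = self.gpu.pop(tid)    # offload GPU -> CPU
        for tid in predicted_order[: self.gpu_capacity]:
            if tid in self.cpu:
                self.gpu[tid] = self.cpu.pop(tid)    # prefetch CPU -> GPU

    def access(self, tid):
        """On a mispredicted access, fault the tensor in on demand,
        evicting a resident tensor if GPU memory is full."""
        if tid in self.gpu:
            return self.gpu[tid]
        if len(self.gpu) >= self.gpu_capacity:
            victim, data = self.gpu.popitem(last=True)
            self.cpu[victim] = data                  # evict to host
        self.gpu[tid] = self.cpu.pop(tid)            # demand migration
        return self.gpu[tid]
```

A mispredicted access still succeeds via the demand path, mirroring how a UVM-style fallback would behave; the benefit of an accurate pilot model is that most tensors are already resident before they are touched.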
