650: SparseGPT: Remove 100 Billion Parameters but Retain 100% Accuracy

03/02/2023 7 min

Episode Synopsis

SparseGPT is a noteworthy one-shot pruning technique that can remove at least half of the parameters of large language models like GPT-3 with negligible loss of accuracy and no retraining. In this episode, Jon Krohn provides an overview of the technique and explains its commercial and environmental implications.
Additional materials: www.superdatascience.com/650
Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
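
For listeners who want a concrete feel for what "one-shot pruning" means, here is a minimal, hypothetical sketch in PyTorch. It is not SparseGPT's actual algorithm (which prunes each layer using approximate second-order information and reconstructs the remaining weights); it simply zeroes out the smallest-magnitude weights in a layer so that roughly half are removed in a single pass, with no retraining.

```python
import torch

def one_shot_magnitude_prune(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude entries so `sparsity` fraction of weights are removed."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    # Threshold at the k-th smallest absolute value; prune everything at or below it.
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask

# Example: prune a random linear layer's weights to ~50% sparsity in one shot.
layer = torch.nn.Linear(512, 512)
pruned = one_shot_magnitude_prune(layer.weight.data, sparsity=0.5)
print(f"Sparsity after pruning: {(pruned == 0).float().mean():.2%}")
```

Simple magnitude pruning like this degrades accuracy noticeably at GPT scale; the point of SparseGPT is that a smarter, layer-wise pruning pass can reach the same 50%+ sparsity while keeping accuracy essentially intact.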
