Stellar inference speed via AutoNAS

07/09/2021 · 42 min · Episode 148

Episode Synopsis


Yonatan Geifman of Deci makes Daniel and Chris buckle up and takes them on a tour of the ideas behind his new inference platform, which enables AI developers to build, optimize, and deploy blazing-fast deep learning models on any hardware. Don't blink or you'll miss it!

Changelog++ members save 2 minutes on this episode because they made the ads disappear. Join today!

Sponsors:

RudderStack – The smart customer data pipeline, made for developers. Connect your whole customer data stack. Warehouse-first, open source Segment alternative.

SignalWire – Build what's next in communications with video, voice, and messaging APIs powered by elastic cloud infrastructure. Try it today at signalwire.com and use code SHIPIT for $25 in developer credit.

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com

Featuring:

Yonatan Geifman – Website, GitHub, X
Chris Benson – Website, GitHub, LinkedIn, X
Daniel Whitenack – Website, GitHub, X

Show Notes:

Deci
An Introduction to the Inference Stack and Inference Acceleration Techniques
Deci and Intel Collaborate to Optimize Deep Learning Inference on Intel's CPUs
DeciNets: A New Efficient Frontier for Computer Vision Models
White paper