Listen: "Microsoft Reducing AI Compute Requirements with Small Language Models"
Episode Synopsis
“Microsoft is making a bet that we’re not going to need a single AI — we’re going to need many different AIs,” Sebastien Bubeck, Microsoft’s vice president of generative-AI research, tells Bloomberg senior technology analyst Anurag Rana. In this Tech Disruptors episode, the two examine the differences between a large language model like GPT-4o and a small language model such as Microsoft’s Phi-3 family. Bubeck and Rana walk through use cases for each type of model across a range of industries and workflows, and compare the costs and compute/GPU requirements of SLMs versus LLMs.
More episodes of the podcast Tech Disruptors
AT&T Ventures Bets on Future Telecom Trends
29/12/2025
Amazon VP on Rufus and the Future of Search
17/12/2025
Workday's CTO on Its Open-Platform Bet
09/12/2025
You.com CEO on Reinventing Search for AI Era
25/11/2025
Waymo Sees Inflection in the Rollout of AVs
13/11/2025