[MINI] Multi-armed Bandit Problems
Episode Synopsis
The multi-armed bandit problem takes its name from slot machines ("one-armed bandits"). Given the chance to play from a pool of slot machines, all with unknown payout frequencies, how can you maximize your reward? If you knew in advance which machine was best, you would play that machine exclusively. Any other strategy will, on average, earn less, and the shortfall is called the "regret". You can try each slot machine to learn about its payout rate, which we refer to as exploration. Once you've played enough to be convinced you've identified the best machine, you can double down and exploit that knowledge. But how do you best balance exploration and exploitation to minimize the regret of your play? This mini-episode explores a few examples, including restaurant selection and A/B testing, to discuss the nature of this problem. In the end, we touch briefly on Thompson sampling as a solution.
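The Thompson sampling idea mentioned above can be sketched in a few lines: maintain a Beta posterior over each arm's unknown payout rate, sample a plausible rate from each posterior, and play the arm with the highest sample. This is a minimal illustration, not from the episode itself; the arm payout probabilities below are hypothetical.

```python
import random

def thompson_sampling(true_probs, n_rounds=10000, seed=0):
    """Thompson sampling for Bernoulli bandits with Beta(1, 1) priors.

    true_probs are hypothetical, unknown-to-the-player payout rates.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms  # observed payouts per arm
    failures = [0] * n_arms   # observed misses per arm
    total_reward = 0
    for _ in range(n_rounds):
        # Sample a plausible payout rate for each arm from its Beta
        # posterior, then play the arm whose sample is highest. Early on
        # the posteriors are wide, so this explores; as evidence
        # accumulates they narrow, and play concentrates (exploits).
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures, total_reward
```

With example payout rates like `[0.2, 0.5, 0.7]`, most pulls end up going to the best arm after an initial exploration phase, which is exactly the exploration/exploitation balance the episode describes.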
More episodes of the podcast Data Skeptic
Video Recommendations in Industry (26/12/2025)
Eye Tracking in Recommender Systems (18/12/2025)
Cracking the Cold Start Problem (08/12/2025)
Shilling Attacks on Recommender Systems (05/11/2025)
Music Playlist Recommendations (29/10/2025)
Bypassing the Popularity Bias (15/10/2025)
Sustainable Recommender Systems for Tourism (09/10/2025)
Interpretable Real Estate Recommendations (22/09/2025)