Episode Synopsis "2020 AI Research Highlights: Handling distributional shifts (part 2)"
Distributional shifts are common in real-world problems; for example, simulation data is often used for training in data-limited domains. Standard neural networks cannot handle such large domain shifts. They also lack uncertainty quantification: they tend to be overconfident when they make errors. You can read previous posts for other research highlights: generalizable AI (part 1), optimization …
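As a hypothetical illustration of the overconfidence problem mentioned above (not code from the episode), a classifier's softmax output can assign near-certain probability to a single class even when the input comes from a shifted distribution; the logits here are made-up values standing in for a network's output on an out-of-distribution input:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a trained classifier might produce for an
# out-of-distribution input it never saw during training.
ood_logits = [8.0, 1.0, 0.5]

probs = softmax(ood_logits)

# The network reports roughly 99.8% confidence in class 0, yet on
# shifted data the prediction may well be wrong: raw softmax
# confidence is not a calibrated measure of correctness.
print(max(probs))
```

This is why uncertainty quantification matters under domain shift: without it, the model gives no signal that its inputs have drifted away from the training distribution.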
More episodes of the podcast Anima on AI
- Top-10 Things in 2022
- Top-10 AI Research Highlights of 2021
- 2020 AI Research Highlights: Learning Frameworks (part 7)
- 2020 AI Research Highlights: Learning and Control (part 6)
- 2020 AI Research Highlights: Controllable Generation (part 5)
- 2020 AI Research Highlights: AI4Science (part 4)
- 2020 AI Research Highlights: Optimization for Deep Learning (part 3)
- 2020 AI Research Highlights: Handling distributional shifts (part 2)
- 2020 AI Research Highlights: Generalizable AI (part 1)
- My heartfelt apology