Orca 2: Enhancing Reasoning in Smaller Language Models - Technical Details

30/05/2024 8 min

Listen "Orca 2: Enhancing Reasoning in Smaller Language Models - Technical Details"

Episode Synopsis



This story was originally published on HackerNoon at: https://hackernoon.com/orca-2-enhancing-reasoning-in-smaller-language-models-technical-details.
Orca 2 enhances the reasoning of small language models by teaching them diverse solution strategies for different tasks, allowing them to outperform models up to 10x larger on complex benchmarks.
Check more stories related to programming at: https://hackernoon.com/c/programming.
You can also check exclusive content about #language-models, #orca-2, #reasoning-techniques, #machine-learning, #small-models, #imitation-learning, #ai-benchmarks, #model-training, and more.


This story was written by: @textmodels. Learn more about this writer by checking @textmodels's about page, and for more stories, please visit hackernoon.com.



The Orca 2 dataset has four main sources. FLAN: Our main source of prompts for synthetic data generation is the FLAN-v2 Collection [33], which consists of five sub-collections. Following Orca 1 [42], we consider tasks only from CoT, NiV2, T0, Flan 2021, and Dialogue. Some of the tasks come with an associated answer. For the Cautious Reasoning dataset, we selected ~602 zero-shot user queries from the split of 1448 high-quality tasks out of 1913 (a sketch of this kind of filtering is shown below).
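

To make the selection step concrete, here is a minimal Python sketch of filtering tasks by sub-collection and quality and then collecting their zero-shot queries. The data layout, field names (sub_collection, is_high_quality, zero_shot_queries), and the quality criterion are illustrative assumptions, not the actual Orca 2 pipeline.

```python
# Illustrative sketch only: a hypothetical in-memory representation of
# FLAN-v2 tasks and the kind of filtering described above.
from dataclasses import dataclass, field


@dataclass
class FlanTask:
    name: str
    sub_collection: str                      # e.g. "CoT", "NiV2", "T0", "Flan 2021", "Dialogue"
    is_high_quality: bool                    # stand-in for whatever quality criterion was applied
    zero_shot_queries: list[str] = field(default_factory=list)


def select_cautious_reasoning_queries(tasks: list[FlanTask]) -> list[str]:
    """Keep tasks from the chosen sub-collections, restrict to the
    high-quality subset, and collect their zero-shot user queries."""
    kept_collections = {"CoT", "NiV2", "T0", "Flan 2021", "Dialogue"}
    high_quality_tasks = [
        t for t in tasks
        if t.sub_collection in kept_collections and t.is_high_quality
    ]
    return [q for t in high_quality_tasks for q in t.zero_shot_queries]


# Example usage with toy data:
if __name__ == "__main__":
    toy_tasks = [
        FlanTask("arithmetic_cot", "CoT", True, ["What is 17 * 24?"]),
        FlanTask("chitchat", "Dialogue", False, ["How was your day?"]),
    ]
    print(select_cautious_reasoning_queries(toy_tasks))  # only the CoT query survives
```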

