Microsoft’s new Orca-Math AI outperforms models 10x larger

Key Points:

  • Microsoft introduces Orca-Math, a fine-tuned variant of the Mistral 7B model that excels at math word problems
  • Orca-Math outperforms models with 10 times more parameters on the GSM8K benchmark
  • The Orca team used synthetic training data and the Kahneman-Tversky Optimization (KTO) method to boost the model’s performance

Summary:

Microsoft Research has unveiled Orca-Math, a variant of Mistral AI’s Mistral 7B model fine-tuned to excel at math word problems. Arindam Mitra, a senior researcher at Microsoft, shared that Orca-Math outperforms most AI models in the 7- to 70-billion-parameter range on the GSM8K benchmark, OpenAI’s test of grade-school-level math.

What makes Orca-Math impressive is its efficiency: despite having only 7 billion parameters, it competes with far larger models from industry giants like OpenAI and Google. Mitra highlighted the team’s approach, in which specialized AI agents iteratively generated and refined a large dataset of word problems used to train the model, as sketched below.
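
The article does not describe the agent setup in detail, but the general idea of agents collaborating to create and polish training problems can be illustrated with a rough sketch. Everything here is hypothetical: `ask_llm` is a stand-in for whatever chat-model API the team used, and the agent prompts are invented for illustration, not Microsoft’s actual prompts.

```python
# Hypothetical sketch of multi-agent problem generation.
# `ask_llm` is a placeholder for any chat-model call; wire it to a
# real API before use. The prompts are illustrative only.

def ask_llm(system: str, user: str) -> str:
    """Placeholder: route this to a chat-model API of your choice."""
    raise NotImplementedError("connect to a real model")

def expand_problem(seed_problem: str) -> str:
    # "Suggester" agent: propose a harder variant of a seed problem.
    harder = ask_llm(
        system="You rewrite math word problems to be more challenging.",
        user=f"Add one extra reasoning step to this problem:\n{seed_problem}",
    )
    # "Editor" agent: ensure the variant is unambiguous and solvable.
    return ask_llm(
        system="You verify that math word problems are well-posed.",
        user=f"Fix any issues and return the final problem:\n{harder}",
    )
```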

The Orca team combined supervised fine-tuning with “Kahneman-Tversky Optimization” (KTO), a preference-tuning method developed by Contextual AI, to sharpen the model’s accuracy at solving math problems. They also released the synthetic dataset of 200,000 math word problems on Hugging Face under a permissive MIT license, inviting the AI community to explore and build on the work.
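
As a rough illustration, the released dataset can be loaded from the Hugging Face Hub with the `datasets` library. The repository ID and column names below are assumptions based on the article’s description and should be verified on the Hub.

```python
# Hedged sketch: download and inspect the Orca-Math synthetic dataset.
# The repo ID "microsoft/orca-math-word-problems-200k" and the
# "question"/"answer" column names are assumptions to verify on the Hub.
from datasets import load_dataset

ds = load_dataset("microsoft/orca-math-word-problems-200k", split="train")
print(len(ds))             # expected on the order of 200,000 examples
sample = ds[0]
print(sample["question"])  # a synthetic grade-school word problem
print(sample["answer"])    # its worked solution
```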

This achievement builds on Microsoft’s earlier releases in the Orca family, with Orca-Math showing how smaller language models continue to grow more capable. The team’s methodology offers a glimpse of where research into compact, specialized models is headed.
