Alibaba launches Marco-o1 AI model with 6% boost in math problem-solving accuracy
Alibaba has introduced Marco-o1, a new large language model that improves mathematical problem-solving accuracy by roughly 6%. The model targets open-ended problems, unlike conventional models that excel mainly at structured tasks with clear-cut answers. Marco-o1 combines techniques such as Chain-of-Thought fine-tuning and Monte Carlo Tree Search to strengthen reasoning and translation. It is built on the Qwen2-7B-Instruct architecture and trained on a mix of open-source and proprietary data. The model is freely available on platforms such as GitHub and Hugging Face. The release follows similar advances from other companies and positions Marco-o1 to compete with OpenAI’s o1 model on reasoning capabilities.
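For readers who want to try the model, below is a minimal sketch of loading it with the Hugging Face transformers library. The repository ID AIDC-AI/Marco-o1, the example question, and the generation settings are assumptions for illustration, not details from the announcement; check the model card for the exact recommended usage.

```python
# Minimal sketch of running Marco-o1 via Hugging Face transformers.
# The repo ID "AIDC-AI/Marco-o1" and the chat-style prompt are assumptions;
# consult the model card on Hugging Face for the official usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIDC-AI/Marco-o1"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single GPU
    device_map="auto",
)

# An open-ended math question, phrased as a single chat turn.
messages = [
    {"role": "user", "content": "How many times does the digit 7 appear between 1 and 100?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Allow a generous token budget, since reasoning-style models tend to
# produce a chain of thought before the final answer.
outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```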