Qwen/Qwen2.5-Math-7B-Instruct
Qwen/Qwen2.5-Math-7B-Instruct is a 7.6-billion-parameter instruction-tuned causal language model from the Qwen team, optimized specifically for solving mathematical problems. The model supports both Chain-of-Thought (CoT) and Tool-integrated Reasoning (TIR) for English and Chinese math tasks. It is designed to improve computational accuracy and handle complex mathematical reasoning, achieving strong performance on benchmarks such as MATH.
Overview
Qwen2.5-Math-7B-Instruct is part of the Qwen2.5-Math series, an upgraded collection of mathematical large language models developed by Qwen. This 7.6 billion parameter instruction-tuned model is specifically designed for solving math problems in both English and Chinese.
Key Capabilities
- Mathematical Reasoning: Excels at solving complex mathematical problems.
- Multilingual Support: Capable of handling math problems in both English and Chinese.
- Reasoning Methods: Supports two primary reasoning approaches:
  - Chain-of-Thought (CoT): step-by-step natural language reasoning.
  - Tool-integrated Reasoning (TIR): integrates external tools for precise computation, symbolic manipulation, and algorithmic tasks, significantly improving accuracy on complex problems.
- Performance: Achieves 85.3 on the MATH benchmark using TIR, demonstrating substantial improvements over its predecessor, Qwen2-Math.
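In practice, the two reasoning modes above are usually selected via the system prompt sent with the problem. A minimal sketch of assembling an OpenAI-style request for each mode; the prompt wording follows the upstream Qwen2.5-Math model card, but verify it against the current card before relying on it:

```python
# System prompts for the two reasoning modes (wording per the upstream
# Qwen2.5-Math model card; treat as an assumption and double-check).
COT_PROMPT = "Please reason step by step, and put your final answer within \\boxed{}."
TIR_PROMPT = (
    "Please integrate natural language reasoning with programs to solve "
    "the problem above, and put your final answer within \\boxed{}."
)

def build_messages(problem: str, mode: str = "cot") -> list:
    """Return a chat messages list selecting CoT or TIR via the system prompt."""
    system = COT_PROMPT if mode == "cot" else TIR_PROMPT
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": problem},
    ]

# Example: request TIR-style reasoning for a simple equation.
messages = build_messages("Find the value of $x$ such that $2x + 3 = 11$.", mode="tir")
```

The same messages list can then be passed to any chat-completion endpoint serving this model.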
When to Use
This model is recommended for applications that require robust mathematical problem solving, particularly those involving detailed step-by-step reasoning or precise calculation. The Qwen team notes that the Qwen2.5-Math series is intended primarily for mathematical tasks and is not recommended for general-purpose use.
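The TIR mode mentioned above works by letting the model emit a program, executing it, and feeding the output back for further reasoning. A self-contained sketch of just the execution step, with the model call stubbed out; `run_tool_code` is a hypothetical helper, not part of any Qwen library:

```python
import re
import io
import contextlib

def run_tool_code(completion: str) -> str:
    """Execute the first ```python ...``` block in a model completion and
    capture its stdout, as a TIR-style runner would before returning the
    result to the model. Illustrative only: a real deployment would run
    model-generated code in a sandbox, never via a bare exec()."""
    match = re.search(r"```python\n(.*?)```", completion, re.DOTALL)
    if match is None:
        return ""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(match.group(1), {})
    return buffer.getvalue().strip()

# Stand-in for a TIR-mode completion (no real model call in this sketch).
completion = (
    "To solve 2x + 3 = 11, I will compute x with a program.\n"
    "```python\n"
    "x = (11 - 3) / 2\n"
    "print(x)\n"
    "```\n"
)
print(run_tool_code(completion))  # captured tool output: 4.0
```

In a full TIR loop, the captured output would be appended to the conversation and the model asked to continue reasoning toward the final boxed answer.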