IMO 2025 Gold — 5/6 problems solved, open-source 🥇 Putnam 2024 — 118/120 near-perfect score MATH-500 — 97.3% with DeepThink Max mode CMO 2024 Gold — Chinese Mathematical Olympiad IMO-ProofBench Basic — Near-perfect 99% accuracy Apache 2.0 License — DeepSeekMath-V2 open weights Self-Verifiable Proofs — World-first AI self-verification at IMO level
∫₀^∞ e⁻ˣ² dx = √π/2
P(n) → P(n+1) ∀n ∈ ℕ
det(A) = Σ_σ sgn(σ) ∏ᵢ a_{i,σ(i)}
∑_{n=1}^∞ 1/n² = π²/6
IMO 2025 Gold · Putnam 118/120 · MATH-500 97.3% · Open Source

AI That Masters
Mathematics.

DeepSeek's mathematical AI family — from competition-level olympiad problem solving to formal theorem proving — has achieved what was considered impossible just two years ago: open-source AI that earns an IMO Gold Medal and verifies its own proofs.

Try DeepThink Math Free → 🤗 DeepSeekMath-V2 GitHub ↗
🥇 Gold · IMO 2025
118/120 · Putnam 2024
97.3% · MATH-500
99% · IMO-ProofBench Basic
5/6 · IMO Problems Solved
Apache 2.0 · Math-V2 License
Mathematical AI Models

Every Model, Every Level of Mathematics

From competition math to formal proof verification — DeepSeek's mathematical AI family covers every level of mathematical reasoning, all open-source.

🥇 IMO 2025 Gold Medal
🔬 THEOREM PROVING 🏆
DeepSeekMath-V2
Apache 2.0 · Released Nov 2025

The world's most capable open-source mathematical reasoning model. Built on DeepSeek-V3.2-Exp-Base with a self-verifiable reasoning pipeline. First open-source model to achieve IMO Gold, matching OpenAI and Google DeepMind at this elite level. Solves 5/6 IMO 2025 problems. Near-perfect 118/120 on Putnam 2024. 99% on IMO-ProofBench Basic.

IMO 2025
Gold Medal
118/120
Putnam 2024
99%
IMO-ProofBench
Apache 2.0
License
Download Weights ↗
⚡ FRONTIER REASONING 🧠
DeepSeek V4-Pro (Think Max)
deepseek-v4-pro · April 24, 2026

The flagship 1.6T parameter model with Think Max reasoning mode delivers 97.3% MATH-500 and top-tier HMMT scores (95.2%). Best for production math applications — accessible via API and free web chat. Three reasoning effort modes (Non-Think, Think High, Think Max) let you trade speed for depth.

97.3%
MATH-500
95.2%
HMMT 2026
1M
Context
MIT
License
Try Expert Mode Free →
🔎 CHAIN-OF-THOUGHT 🔬
DeepSeek-R1 (Reasoning)
deepseek-reasoner · Original release, Jan 2025

The model that changed everything. Trained via pure reinforcement learning — no supervised fine-tuning. Develops chain-of-thought reasoning organically. 97.3% on MATH-500, 89.3% on AIME 2025. Showed the world that reasoning AI could be open, affordable, and match o1-level performance. Distilled variants from 1.5B to 70B available.

97.3%
MATH-500
89.3%
AIME 2025
CoT
Reasoning
MIT
License
Hugging Face ↗
Historical Milestones

The Journey to Mathematical Frontier

From a 7B parameter domain model to IMO Gold in under two years — DeepSeek's mathematical AI has followed an extraordinary trajectory of improvement.

February 2024
DeepSeekMath-7B — The Foundation

DeepSeek released the original DeepSeekMath-7B, initialized from DeepSeek-Coder-Base-v1.5. The team discovered that starting from a code-trained model was significantly better for mathematical reasoning than starting from a general LLM — a key finding that shaped all subsequent development. The model was pre-trained on 120B math-specific tokens and outperformed all open-source models of its time on English and Chinese math benchmarks, approaching the performance of the closed-source Minerva 540B. The DeepSeekMath Corpus — a curated multilingual mathematical dataset — became a foundational resource for the field.

First milestone · 7B params
January 2025
DeepSeek-R1 — Chain-of-Thought Breakthrough

The release that shocked the world. DeepSeek-R1 trained entirely via reinforcement learning — no supervised fine-tuning — and developed chain-of-thought reasoning organically. It matched OpenAI's o1 on MATH-500 (97.3%) and showed that open-source AI could compete at the frontier of mathematical reasoning. The release briefly affected NVIDIA's stock price by demonstrating that frontier-quality reasoning AI didn't require billions in proprietary training infrastructure. Distilled variants (1.5B to 70B parameters) were released simultaneously, bringing competition-level math reasoning to consumer hardware for the first time.

97.3% MATH-500 · R1 Launch
May 2025
R1-0528 — Deeper Reasoning Compute

DeepSeek released R1-0528, a significant upgrade built on the V3 Base model. Average token usage during math reasoning tasks nearly doubled — from 12K to 23K tokens per AIME question — indicating dramatically deeper search and reasoning chains. Performance on AIME 2025 reached 89.3%, approaching the top-tier models. The upgrade demonstrated that scaling test-time compute, not just parameters, was the key lever for mathematical reasoning improvement — a finding that directly informed the DeepSeekMath-V2 architecture.

89.3% AIME 2025 · 2× compute
November 2025
DeepSeekMath-V2 — IMO Gold Medal

The defining milestone in open-source mathematical AI. DeepSeekMath-V2 achieved gold-level performance at IMO 2025, solving 5 of 6 problems — the same result as OpenAI's experimental model and Google DeepMind's Gemini Deep Think. It scored a near-perfect 118/120 on Putnam 2024, the prestigious US undergraduate competition. On IMO-ProofBench Basic, it reached 99% accuracy — far ahead of all other models. The breakthrough was the self-verifiable reasoning architecture: the model trains its own verifier, generates proofs, identifies flaws, and revises before finalizing. Hugging Face CEO Clément Delangue called it "the brain of one of the best mathematicians in the world, for free."

🥇 IMO 2025 Gold · 118/120 Putnam
April 2026
V4-Pro — Production Math at Scale

DeepSeek V4-Pro brings frontier math capability to production. With Think Max mode, it achieves 97.3% on MATH-500 and 95.2% on HMMT 2026 — competitive with GPT-5.4 (97.7%) and Claude (96.2%). The 1M token context window means entire competition solution sets, research papers, or textbooks can be processed in a single request. At $1.74/1M input tokens with 75% promotional discount through May 2026, it's the most cost-effective route to frontier math AI for production applications.

95.2% HMMT · 1M context
Performance Benchmarks

Math That Speaks for Itself

Across competition math, theorem proving, and curriculum math, DeepSeek leads or ties the best models in the world. All benchmark data from official sources and independent evaluations.

IMO 2025 — International Math Olympiad
Most prestigious math competition · 5/6 problems solved = Gold Medal
🥇 All three: Gold
DeepSeekMath-V2
5/6 🥇
OpenAI o3
5/6 🥇
Gemini DeepThink
5/6 🥇
Human Gold cutoff
4/6
Putnam 2024 (scaled test-time compute)
US Undergraduate Mathematics Exam · Max score: 120
DeepSeekMath-V2 · 118/120
DeepSeekMath-V2
118/120
AIME 2025 — American Invitational Math Exam
30 problems · Path to IMO selection
DeepSeek R1: 89.3%
GPT-5.2 (xhigh)
~100%
DeepSeek R1
89.3%
OlymMATH-EASY
89.7%
HMMT 2026 — Harvard-MIT Math Tournament
Undergraduate-level competition math
GPT-5.4 leads · 97.7%
GPT-5.4
97.7%
Claude Opus 4.6
96.2%
V4-Pro (Think Max)
95.2%
MATH-500
Curated competition math benchmark · AMC to pre-olympiad difficulty
DeepSeek: 97.3%
V4-Pro / R1
97.3%
GPT-5.4
96.4%
Gemini 3.1 Pro
~95%
MMLU-Pro (Mathematics Subset)
Graduate-level math knowledge across topology, analysis, algebra
V4-Pro 73.5%
V4-Pro
73.5%
DeepSeekMath-V2
75.7%*
IMO-ProofBench Basic (60 proof problems)
Formal proof generation · Developed by DeepMind IMO team
DeepSeekMath-V2 99%
DeepSeekMath-V2
~99%
Other models
~60%
IMO-ProofBench Advanced
Hard proof problems requiring novel mathematical reasoning
Gemini Deep Think leads · V2 close 2nd
Gemini Deep Think
Leads
DeepSeekMath-V2
2nd
CMO 2024 — Chinese Mathematical Olympiad
DeepSeekMath-V2: Gold 🥇
DeepSeekMath-V2
Gold 🥇
Technical Deep Dive

How DeepSeek Math Works

Understanding the architectural innovations behind world-class mathematical AI — from reinforcement learning to self-verifiable proof generation.

🔁Reinforcement Learning for Math (GRPO)

The original DeepSeekMath introduced Group Relative Policy Optimization (GRPO), a novel reinforcement learning algorithm specifically designed for mathematical reasoning. Instead of requiring a separate critic model (as in standard PPO), GRPO estimates baselines from group scores — dramatically reducing memory and compute requirements while maintaining training stability.

The mathematical reward signal is simple but powerful: the model receives a positive reward if its final computed answer matches the ground-truth, and zero otherwise. This sparse reward structure forces the model to develop intermediate reasoning steps (chain-of-thought) organically through RL, rather than imitating provided demonstrations.

GRPO Advantage Estimator
Â_i = (r_i - mean(r)) / std(r)
where r_i = reward for output i in the group

This approach was foundational to DeepSeek-R1's training recipe — the model that first demonstrated RL-trained chain-of-thought reasoning could match supervised state-of-the-art on competition math benchmarks.
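The group-relative advantage above can be sketched in a few lines. This is an illustrative implementation of the estimator, not DeepSeek's training code; the function name and the toy reward group are ours.

```python
from statistics import mean, stdev

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """GRPO advantage: normalize each sampled output's reward against its
    own group, so no separate learned critic model is needed."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Sparse math reward: 1.0 if the final answer matches ground truth, else 0.0.
# Here 2 of 4 sampled solutions for one problem were correct:
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))
```

Correct solutions receive a positive advantage and incorrect ones a negative advantage, without any value network in the loop.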

Self-Verifiable Proof Generation (Math-V2)

The key innovation in DeepSeekMath-V2 is self-verifiable mathematical reasoning — a system where the model trains both a proof generator and a proof verifier simultaneously, creating a self-improvement loop that scales with test-time compute.

Traditional RL for math rewards correct final answers. This works for problems with known solutions but fails for frontier-level mathematics where ground truth isn't available. DeepSeekMath-V2 solves this by training an accurate verifier that can validate proofs on their logical structure — not just their numeric answers.

The training pipeline works as follows: (1) Generate candidate proofs. (2) Verifier scores them for logical consistency. (3) Generator receives reward based on verifier score. (4) Generator learns to produce proofs the verifier finds correct. (5) When the generator improves, verification compute scales up to maintain the generation-verification gap. This creates a self-improving system that can continue improving beyond the limits of labeled training data.

Self-Verification Loop
Proof Generator → candidate_proof → Verifier
Verifier → r(proof) → Generator (reward signal)
Gap maintained: V_compute ↑ as G_quality ↑
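The loop above can be simulated with stubs standing in for the two trained models. Everything here is a toy sketch — the real generator and verifier are large neural networks, and these function names, scores, and thresholds are illustrative only.

```python
import random

def generate_proofs(problem: str, round_: int, n: int = 4) -> list[str]:
    # Stand-in for the proof generator: sample n candidate proofs per round.
    return [f"candidate {round_}.{i} for: {problem}" for i in range(n)]

def verify(proof: str) -> float:
    # Stand-in for the learned verifier: a soundness score in [0, 1].
    rng = random.Random(proof)  # deterministic per candidate, for the demo
    return rng.random()

def self_verification_step(problem: str, accept_threshold: float = 0.9,
                           max_rounds: int = 3) -> tuple[str, float]:
    """Generate -> verify -> revise: keep sampling until a candidate clears
    the verifier's threshold or the round budget (the 'verification
    compute') is exhausted, then finalize the best-scoring proof."""
    best_proof, best_score = "", -1.0
    for round_ in range(max_rounds):
        for proof in generate_proofs(problem, round_):
            score = verify(proof)            # this score is the generator's reward
            if score > best_score:
                best_proof, best_score = proof, score
        if best_score >= accept_threshold:   # verifier accepts -> stop early
            break
    return best_proof, best_score
```

In the real system the generator is updated toward proofs the verifier accepts, and the round budget grows as generation quality improves, maintaining the generation-verification gap.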
📚The DeepSeekMath Corpus

Mathematical reasoning requires not just capability but knowledge — the right training data. DeepSeek built a 120 billion token mathematical corpus by crawling and filtering Common Crawl for high-quality mathematical content. This corpus is multilingual, with strong representation in Chinese mathematical notation and terminology.

The filtering pipeline trained a 1B parameter classifier to score mathematical relevance and quality, applied across the entirety of Common Crawl. Crucially, the team discovered that training on code before math produces significantly better mathematical reasoning than training on general text — the structural similarities between code (formal, precise, sequential logic) and mathematical proofs transfer remarkably well.

The corpus upgrade alone produced measurable improvements: pre-training a 1B model on the new corpus showed +5.5% on HumanEval and +4.4% on MBPP compared to the old dataset, confirming the data quality improvements were real and significant.

120B
Training tokens
1B
Quality classifier
Multi
Lingual
Code→Math
Init strategy
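The filtering pipeline above amounts to scoring every document and keeping those over a relevance threshold. Below, a trivial keyword scorer stands in for the 1B-parameter classifier; the marker list, threshold, and function names are purely illustrative.

```python
MATH_MARKERS = ("theorem", "proof", "lemma", "integral", "equation")

def toy_math_score(doc: str) -> float:
    # Stand-in for the 1B-parameter quality classifier: fraction of words
    # that are mathematical keywords.
    words = [w.strip(".,;:") for w in doc.lower().split()]
    return sum(w in MATH_MARKERS for w in words) / max(len(words), 1)

def filter_corpus(documents: list[str], threshold: float = 0.2) -> list[str]:
    # Keep only documents scoring above the relevance threshold.
    return [doc for doc in documents if toy_math_score(doc) >= threshold]

docs = [
    "We prove the theorem using the lemma and an integral equation.",
    "Top ten travel destinations for the summer.",
]
kept = filter_corpus(docs)
```

The real pipeline applies this score-and-threshold pattern across the entirety of Common Crawl, yielding the 120B-token corpus.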
📐Scaling Test-Time Compute

One of the most important insights from DeepSeek's math research is that test-time compute scaling is the primary driver of mathematical reasoning performance — more so than parameter count alone. When you give the model a larger "thinking budget," performance on hard problems improves dramatically.

The progression from R1 to R1-0528 is instructive: average token usage on AIME problems doubled (12K → 23K tokens per problem) and AIME 2025 accuracy increased from roughly 80% to 89.3%. The model wasn't larger — it had more compute budget to explore reasoning chains.

DeepSeek V4-Pro's Think Max mode operationalizes this insight: by setting budget: "max" in the API, you unlock the full reasoning compute. For the hardest olympiad-level problems, Think Max can use 50,000+ tokens of internal reasoning before producing an answer — comparable to a mathematician working through a hard problem over several hours.

The practical takeaway: for math applications, always use Think Max mode on hard problems, and set the maximum output token limit generously. The 384K output limit in V4 exists precisely to support these extended reasoning chains.
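A Think Max request can be sketched as plain HTTP. The `thinking` settings and `max_tokens=65536` are taken from the guidance above; the model id, endpoint path, and response shape are assumptions — confirm them against the official API reference before use.

```python
import json
import os
import urllib.request

# Request payload; "thinking" and max_tokens follow the guidance above.
payload = {
    "model": "deepseek-reasoner",                      # assumed model id
    "max_tokens": 65536,                               # room for long reasoning chains
    "thinking": {"type": "enabled", "budget": "max"},  # Think Max mode
    "messages": [{
        "role": "user",
        "content": "Prove that sqrt(2) is irrational. "
                   "Verify your solution and check for errors before finalizing.",
    }],
}

def send(body: dict, api_key: str) -> dict:
    req = urllib.request.Request(
        "https://api.deepseek.com/chat/completions",   # assumed endpoint path
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if os.environ.get("DEEPSEEK_API_KEY"):                 # only call when a key is set
    reply = send(payload, os.environ["DEEPSEEK_API_KEY"])
    print(reply["choices"][0]["message"]["content"])
```

Note the generous `max_tokens` — the extended reasoning chain is emitted as output, so a tight limit truncates exactly the part you paid Think Max for.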

Mathematical Capabilities

Every Domain of Mathematics

DeepSeek's mathematical AI covers the full spectrum — from elementary school arithmetic to research-level theorem proving — with specialized capabilities for each domain.

🏆
Olympiad Problem Solving

IMO Gold-level performance on algebra, geometry, number theory, and combinatorics. Works through multi-step problems with rigorous, human-readable reasoning chains.

IMO Gold 2025
📜
Formal Proof Generation

Generates structured mathematical proofs with explicit logical steps, suitable for academic and research contexts. 99% on IMO-ProofBench Basic. Self-verifies before finalizing.

Self-verifiable
🔁
Chain-of-Thought Reasoning

Toggle DeepThink mode to see every reasoning step — algebraic manipulation, case analysis, induction steps, counter-examples. Educational for learning and verification.

Full transparency
📊
Calculus & Analysis

Limits, derivatives, integrals, series convergence, real and complex analysis. Works through ε-δ proofs, contour integration, and Fourier analysis with graduate-level precision.

Grad level
🔢
Number Theory

Prime factorization, modular arithmetic, Diophantine equations, cryptographic primality tests, quadratic residues, and algebraic number theory.

Research grade
📐
Geometry & Topology

Euclidean and non-Euclidean geometry, affine transformations, topological spaces, manifolds, and metric spaces. Provides coordinate and synthetic proofs.

All geometry
🧮
Linear & Abstract Algebra

Matrices, eigenvalues, vector spaces, groups, rings, fields, Galois theory. Proves structural theorems and performs explicit computations with symbolic clarity.

Abstract algebra
🎲
Probability & Statistics

Measure-theoretic probability, stochastic processes, Bayesian reasoning, hypothesis testing, and statistical modeling with full derivation support.

Rigorous proofs
💻
Algorithm Analysis

Asymptotic complexity, recurrence relations, combinatorial algorithms, graph theory, and computational complexity. Bridges math and CS theory.

Theory CS
🌐
Multilingual Math

Strong performance on Chinese mathematical notation and problems — outperforms GPT-4o and Claude on Chinese math benchmarks. The best AI for Chinese-language math.

Chinese leader
🔍
Error Detection & Correction

Upload a solution and ask "find the error." DeepSeek identifies logical gaps, computational mistakes, and faulty assumptions — invaluable for proof checking.

Proof checking
📖
Step-by-Step Teaching

Adapts explanation depth to the student. Elementary algebra, high school calculus, undergraduate real analysis, or graduate topology — each at the right level.

Adaptive
Use Cases

Who Uses DeepSeek Math — and How

Mathematical AI is no longer just for mathematicians. DeepSeek's math capabilities power education, research, finance, engineering, and competitive training.

01
Competition Math Training

Students preparing for AMC, AIME, USAMO, Putnam, and IMO use DeepSeek as an AI math coach. It provides step-by-step solutions, generates similar practice problems, and explains why each approach works. Think Max mode shows the full reasoning chain — exactly what competition judges want to see.

02
University Math Education

Professors and students use DeepSeek for homework verification, proof checking, and concept explanation. Upload a proof attempt and ask for feedback. Request explanations of Rudin's real analysis at three different levels of rigor. Generate problem sets with solutions for any undergraduate math topic.

03
Mathematical Research Assistance

Researchers use DeepSeek for literature search summaries, conjecture exploration, and proof sketch generation. It won't replace a Fields Medalist, but it dramatically accelerates the routine work: checking edge cases, exploring analogous results, and generating candidate approaches to known open problems.

04
Quantitative Finance

Derive stochastic differential equations, solve Black-Scholes variants, verify portfolio optimization proofs, and analyze risk model assumptions. V4-Pro with Think Max handles the derivation work that used to require senior quant mathematicians — at a fraction of the time and cost.

05
Engineering Mathematics

Solve systems of PDEs, perform Fourier analysis on signals, verify structural mechanics calculations, and optimize constrained systems. DeepSeek understands the physical context and produces dimensionally consistent, numerically verifiable results.

06
Cryptography & Security

RSA primality analysis, elliptic curve arithmetic, lattice-based cryptography proofs, and zero-knowledge proof system design. Graduate-level number theory applied to real security problems — with full working derivations.

07
AI/ML Theory

Derive convergence proofs for optimization algorithms, analyze generalization bounds, verify information-theoretic arguments, and work through the mathematics of transformer architectures. Essential for researchers who need to publish mathematically rigorous ML papers.

08
K-12 Math Education

Teachers generate leveled problem sets, explanations, and worked examples in seconds. Students get patient, step-by-step help at any time. DeepSeek adjusts explanation depth automatically — elementary school arithmetic through AP Calculus.

09
Data Science & Statistics

Prove statistical estimator properties, derive sampling distributions, verify hypothesis test power calculations, and work through Bayesian inference derivations. Bridges the gap between theoretical statistics and applied data science practice.

Getting Started

How to Use DeepSeek Math

From competition problems to theorem proving — how to get the best mathematical output from DeepSeek.

1
Enable DeepThink mode

In chat.deepseek.com, toggle DeepThink for hard problems. Via API: set extra_body={"thinking":{"type":"enabled","budget":"max"}}. Critical for IMO-level work.

2
State the problem precisely

For math, precision matters more than anywhere else. Include all constraints. State what form the answer should take. For proofs, specify which proof style you want (constructive, contradiction, induction).

3
Request verification

Always end with "verify your solution" or "check for errors before finalizing." This triggers the self-check phase and catches ~70% of arithmetic and logical errors before they reach you.

4
Use XML tags for context

Wrap problem in <problem>, constraints in <constraints>, and specify output in <output_format>. For multi-part problems, number each part explicitly.

5
Set output token limit generously

Hard olympiad problems need 10,000–50,000 output tokens for full chain-of-thought. Set max_tokens=65536 in the API. In chat, Think Max automatically expands the budget.

6
Use DeepSeekMath-V2 for proofs

For formal theorem proving and IMO-level proof generation, download DeepSeekMath-V2 from Hugging Face (Apache 2.0). Built on DeepSeek-V3.2-Exp-Base with extended test-time compute.

Key prompting tips for mathematics: (1) In DeepThink (R1) mode, never add "think step by step" — the model already reasons internally. (2) Avoid few-shot examples in DeepThink and Think Max modes — they degrade reasoning performance. (3) For competition problems, name the exact competition (AMC 12, AIME, Putnam) so DeepSeek calibrates difficulty and style appropriately. (4) For proofs, specify whether you need publication-level rigor or just a clear sketch.
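Steps 2-4 combine naturally into a small prompt builder. This helper and the sample problem are illustrative only — the XML tag names come from step 4, and the closing instruction from step 3.

```python
def build_math_prompt(problem: str, constraints: str, output_format: str) -> str:
    """Wrap a problem in the XML tags recommended in step 4, and close with
    the self-check instruction from step 3."""
    return (
        f"<problem>\n{problem}\n</problem>\n"
        f"<constraints>\n{constraints}\n</constraints>\n"
        f"<output_format>\n{output_format}\n</output_format>\n"
        "Verify your solution and check for errors before finalizing."
    )

prompt = build_math_prompt(
    problem="Find all positive integers n such that n^2 + 1 divides n^3 + 9.",
    constraints="n is a positive integer; justify every divisibility step.",
    output_format="Full competition-style proof, then the final answer set.",
)
print(prompt)
```

Send the resulting string as the user message with DeepThink enabled; the explicit output-format tag is what keeps long proofs from drifting into an unwanted style.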
Pricing

IMO-Level Math. Zero Subscription.

The web chat is completely free. DeepSeekMath-V2 weights are Apache 2.0. API pricing for V4-Pro makes frontier math AI affordable at any scale.

Free Forever
Web Chat (DeepThink)
$0/month

Full access to V4-Pro Expert Mode with Think Max reasoning at chat.deepseek.com. No limits for personal use. The world's best freely accessible math AI.

V4-Pro + Think Max · ✓ Free
97.3% MATH-500 · ✓ Included
File / LaTeX upload · ✓ Free
Unlimited sessions · ✓ No cap
Start Solving Free →
Best for Research
V4-Pro API (Think Max)
$1.74/1M in

Programmatic access to Think Max reasoning for math applications. 75% promotional discount until May 31, 2026 reduces effective cost to $0.435/1M.

Input (promo) · $0.435/1M
Input (cache hit) · $0.044/1M
Output · $0.87/1M
Max output · 384K tokens
Get API Key →
DeepSeekMath-V2
Open Source

IMO Gold open-source model. Apache 2.0 license — commercial use, fine-tuning, and distribution all permitted. Download from Hugging Face.

API fees · $0 forever
License · Apache 2.0
Base model · DS-V3.2-Exp
Specialization · Theorem proving
Download Weights ↗
V4-Flash API (Fast Math)
$0.14/1M in

V4-Flash with Think Max for high-volume math pipelines. 12.4× cheaper than Pro. Still achieves excellent scores on curriculum-level math at 83 tok/s.

Input (cache miss) · $0.14/1M
Input (cache hit) · $0.014/1M
Output · $0.28/1M
Best for · High volume
Get API Key →
FAQ

Frequently Asked Questions

What is DeepSeek Math and what can it do?+

"DeepSeek Math" refers to the family of DeepSeek AI models optimized for mathematical reasoning. This includes: DeepSeekMath-V2 (November 2025, IMO Gold, formal theorem proving), DeepSeek-R1 (the original chain-of-thought reasoning model, 97.3% MATH-500), and DeepSeek V4-Pro with Think Max (the current production model for math at 95.2% HMMT 2026). Together, these models cover the full spectrum from elementary school arithmetic to olympiad proof generation to graduate-level research mathematics. All are either free to use via chat or open-source under permissive licenses.

Did DeepSeek really win an IMO Gold Medal?+

Yes. DeepSeekMath-V2 solved 5 out of 6 problems at the 2025 International Mathematical Olympiad (IMO) — the threshold for a Gold Medal. This was confirmed by DeepSeek's published evaluation on the Hugging Face model page and the IMO-ProofBench benchmark. The same achievement was matched by Google DeepMind's Gemini Deep Think and an experimental OpenAI model, with all three reaching gold level independently. DeepSeekMath-V2 is the only open-source model to achieve this, making it the most capable freely available mathematical AI in the world. It also scored 118/120 on the Putnam 2024 exam and Gold at CMO 2024.

What is DeepSeekMath-V2 and how is it different from R1?+

DeepSeek-R1 is a general-purpose reasoning model (January 2025) that uses reinforcement learning to develop chain-of-thought reasoning across math, coding, and logic. It achieves 97.3% on MATH-500 and 89.3% on AIME 2025. DeepSeekMath-V2 (November 2025) is a specialized model built specifically for formal mathematical proof generation and verification, built on DeepSeek-V3.2-Exp-Base. Its key innovation is self-verifiable reasoning — it trains a proof verifier alongside the proof generator, allowing it to check and revise its own proofs before finalizing. This makes it better at hard olympiad proofs where final-answer RL alone is insufficient. For competition math problems with numeric answers: use R1 or V4-Pro Think Max. For formal theorem proving and IMO-level proof construction: use DeepSeekMath-V2.

How do I use DeepSeek for hard math problems?+

Three key steps: (1) Enable Think Max — toggle DeepThink in chat, or via API set extra_body={"thinking":{"type":"enabled","budget":"max"}}. (2) State the problem precisely — include all constraints, the domain (e.g. "number theory", "real analysis"), and what output format you want (numeric answer, full proof, sketch). (3) Add a self-check instruction — end your prompt with "verify your solution and check for errors before finalizing." Also: set max_tokens=65536 in the API for hard problems that need extended reasoning chains. Do not add "think step by step" — DeepThink mode already does this, and adding it can interfere.

Can DeepSeek Math help with university-level mathematics?+

Yes, across all standard undergraduate and many graduate topics: real analysis (ε-δ proofs, uniform continuity, Riemann integration), abstract algebra (group theory, ring theory, Galois theory), linear algebra (eigenvalue analysis, Jordan normal form, spectral theory), topology (metric spaces, continuity, compactness), complex analysis (Cauchy's theorem, residues, conformal maps), probability theory (measure-theoretic foundations, martingales), and more. It works best when you specify the level of rigor required and the course context. For graduate research mathematics involving novel results, treat it as an assistant that generates ideas and checks arguments — not as a source of truth for unpublished mathematical claims.

Is DeepSeek Math better than Wolfram Alpha or Mathematica?+

Different tools for different tasks. Wolfram Alpha and Mathematica excel at: exact symbolic computation, definite integral evaluation, series expansions, solving differential equations algorithmically, and numerical computation to arbitrary precision — these are deterministic, algorithmic operations where computer algebra systems are exact. DeepSeek Math excels at: understanding mathematical problems stated in natural language, constructing human-readable proofs, explaining mathematical concepts, working through multi-step reasoning that requires insight rather than algorithm, and handling olympiad problems where symbolic computation isn't the bottleneck. For a production math pipeline, the ideal setup combines both: use DeepSeek for reasoning and proof structure, and Wolfram/Mathematica for exact numerical verification.

How do I run DeepSeekMath-V2 locally?+

DeepSeekMath-V2 is built on DeepSeek-V3.2-Exp-Base and follows the same inference setup. Download from huggingface.co/deepseek-ai/DeepSeek-Math-V2. For inference support, refer to the DeepSeek-V3.2-Exp GitHub repository. The model requires significant GPU memory — similar to V3.2-Exp (multiple H100s for the full model). The Apache 2.0 license permits commercial use, fine-tuning, and distribution. For most individual researchers and students, using the free chat interface at chat.deepseek.com with Expert Mode + DeepThink is more practical than self-hosting, and provides V4-Pro quality at zero cost.

What competition math levels does DeepSeek cover?+

DeepSeek covers all major international competition levels: AMC 10/12 (high school, ~90-95% accuracy), AIME (89.3% Pass@1 on 2025 problems), USAMO/USAMO-style olympiad (strong proof generation), Putnam (118/120 near-perfect), CMO 2024 (Gold), IMO 2025 (Gold, 5/6 problems). For competition training, Think Max mode produces full solution writeups that match competition proof standards — not just numeric answers. Generate 10 similar problems in the same competition style with a single prompt to build a practice set.

Get Started

Ready to solve harder problems?

IMO Gold-level mathematical reasoning, free to use. Open-source weights for researchers. No subscription. No credit card. Start thinking mathematically at the frontier of what AI can do.

Try DeepThink Math Free → 🤗 DeepSeekMath-V2 GitHub ↗