Easy Benchmarks: LLM model index

DeepSeek

DeepSeek V3.2 Exp (Reasoning)

DeepSeek V3.2 Exp (Reasoning) is a model profile from DeepSeek, a family known for strong reasoning, strong coding, and cost-conscious API pricing across its R1 and V-series releases. This page separates the public release context from measured benchmark performance so you can inspect quality and efficiency directly.

DeepSeek release notes

Operational Metrics

Output Speed: n/a
First Token: n/a
Blended Price: $0.315/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Sep 29, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
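Since the metadata above is described as queryable facts from an upstream model payload, a minimal sketch of reading it may help. The payload shape, the `fact` helper, and every field name other than `release_date` are assumptions for illustration, not the site's actual API:

```python
from datetime import datetime

# Hypothetical payload; only release_date is a confirmed upstream field.
payload = {"release_date": "2025-09-29"}

def fact(payload: dict, key: str) -> str:
    """Return a display value for a metadata key, or 'n/a' if absent."""
    value = payload.get(key)
    if value is None:
        return "n/a"
    if key == "release_date":
        return datetime.strptime(value, "%Y-%m-%d").strftime("%b %d, %Y")
    return str(value)

print(fact(payload, "release_date"))    # Sep 29, 2025
print(fact(payload, "context_window"))  # n/a
```

Missing fields fall back to "n/a", matching how the page renders absent context window and modality data.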

Strength: LiveCodeBench (LCB)
78.9% (rank #31 of 343 models)

Strength: MMLU-Pro
85.0% (rank #32 of 345 models)

Strength: Value
104.4 (rank #41 of 323 models)

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
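The percentile placement behind these bars can be sketched from rank and model count. The function name and the exact scoring convention (share of models ranked at or below this one) are assumptions; for cost metrics, because models are already ranked ascending by price, a low price gives a low rank number and hence a high percentile, which is the inversion noted above:

```python
def percentile(rank: int, total: int) -> float:
    """Share of the other models placed at or below this rank (higher is better)."""
    return 100.0 * (total - rank) / (total - 1)

# LiveCodeBench: rank #31 of 343 models -> roughly the 91st percentile.
print(round(percentile(31, 343), 1))
```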

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric                                 | Domain             | Value    | Rank |
| -------------------------------------- | ------------------ | -------- | ---- |
| Artificial Analysis Intelligence Index | overall            | 32.9     | #121 |
| Artificial Analysis Coding Index       | coding             | 33.3     | #95  |
| Artificial Analysis Math Index         | math               | 87.7     | #40  |
| MMLU-Pro                               | reasoning          | 85.0%    | #32  |
| GPQA                                   | reasoning          | 79.7%    | #103 |
| Humanity's Last Exam                   | reasoning          | 13.8%    | #102 |
| LiveCodeBench                          | coding             | 78.9%    | #31  |
| SciCode                                | coding, reasoning  | 37.7%    | #137 |
| Blended Price                          | cost               | $0.315/M | #91  |
| Input Price                            | cost               | $0.280/M | #122 |
| Output Price                           | cost               | $0.420/M | #66  |
| Value Index                            | cost, overall      | 104.4    | #41  |
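The blended price in the table is consistent with a 3:1 input:output token weighting of the listed per-token prices. That ratio is an assumption inferred from the numbers on this page, not something the page states, but the arithmetic checks out exactly:

```python
input_price = 0.280   # $ per 1M input tokens
output_price = 0.420  # $ per 1M output tokens

# Weighted average at an assumed 3:1 input:output token ratio.
blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.3f}/M")  # $0.315/M
```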