Easy Benchmarks: LLM model index

DeepSeek

DeepSeek V3 0324

DeepSeek V3 0324 is a DeepSeek model profile, from a family known for strong reasoning, strong coding, and cost-conscious API pricing across the R1 and V-series releases. This page separates the public release context from measured benchmark performance so you can inspect the model's quality and efficiency directly.

DeepSeek release notes

Operational Metrics

Output Speed: n/a
First Token: n/a
Blended Price: $1.25/M
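A blended price is typically a token-weighted mix of the input and output prices. The sketch below assumes the common 3:1 input:output weighting; the exact ratio behind this page's figure is not stated, so treat the weighting as an assumption.

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Token-weighted blend of input and output prices, in USD per million tokens.

    The 3:1 input:output weighting is a common industry convention,
    not confirmed for this page's numbers.
    """
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

# Using this page's listed prices ($1.20/M input, $1.25/M output):
print(round(blended_price(1.20, 1.25), 4))
```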

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Mar 25, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
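The "n/a" entries above follow from fields that are simply absent in the upstream payload. A minimal sketch of that extraction, assuming a hypothetical payload shape in which only release_date is populated:

```python
# Hypothetical payload shape; only release_date is present upstream,
# which is why the other metadata fields render as "n/a".
payload = {"release_date": "2025-03-25"}

def field(payload: dict, key: str) -> str:
    """Return a displayable value, falling back to 'n/a' when missing."""
    value = payload.get(key)
    return str(value) if value not in (None, "") else "n/a"

print(field(payload, "release_date"))    # 2025-03-25
print(field(payload, "context_window"))  # n/a
```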

Strength: MMLU-Pro
81.9% (rank #78 of 345 models)

Strength: MATH-500
94.2% (rank #47 of 201 models)

Watch Area: Input Price
$1.20/M (rank #224 of 325 models)

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
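One way the percentile bars could be derived from the ranks above is the share of models ranked at or below this one. The exact formula used by the site is an assumption; for cost metrics, ranks are computed on inverted (cheapest-first) order, so the same conversion applies.

```python
def percentile_from_rank(rank: int, total: int) -> float:
    """Convert a 1-based rank (1 = best) into a 0-100 percentile.

    Assumed formula: the fraction of models ranked at or below this one.
    For cost domains the ranks are already inverted (cheaper = better),
    so no extra handling is needed here.
    """
    return 100.0 * (total - rank + 1) / total

# MMLU-Pro example from this page: rank #78 of 345 models
print(round(percentile_from_rank(78, 345), 1))
```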

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 22.3 | #219 |
| Artificial Analysis Coding Index | coding | 22.0 | #179 |
| Artificial Analysis Math Index | math | 41.0 | #157 |
| MMLU-Pro | reasoning | 81.9% | #78 |
| GPQA | reasoning | 65.5% | #243 |
| Humanity's Last Exam | reasoning | 5.2% | #269 |
| LiveCodeBench | coding | 40.5% | #178 |
| SciCode | coding, reasoning | 35.8% | #177 |
| MATH-500 | math | 94.2% | #47 |
| AIME | math | 52.0% | #54 |
| Blended Price | cost | $1.25/M | #196 |
| Input Price | cost | $1.20/M | #224 |
| Output Price | cost | $1.25/M | #125 |
| Value Index | cost, overall | 17.8 | #198 |