DeepSeek V3 (Dec '24)

DeepSeek V3 (Dec '24) is a model profile for DeepSeek, a family known for strong reasoning, coding, and cost-conscious API models such as the R1 and V-series releases. This page separates the public release context from measured benchmark performance so you can inspect the model's quality and efficiency directly.

DeepSeek release notes

Operational Metrics

Output Speed: n/a
First Token: n/a
Blended Price: $0.625/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Dec 26, 2024
Context Window: n/a
Modalities: n/a
API fields: release_date
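The metadata above is described as being extracted from an upstream model payload. A minimal sketch of that extraction, assuming the payload is JSON and that the field names (`release_date`, `context_window`, `modalities`) match the "API fields" note — the payload shape here is a hypothetical example, not the actual upstream schema:

```python
import json

# Hypothetical payload: field names and structure are assumptions
# based on the "API fields: release_date" note on this page.
payload = json.loads("""
{
  "name": "DeepSeek V3 (Dec '24)",
  "release_date": "2024-12-26",
  "context_window": null,
  "modalities": null
}
""")

def fmt(value, fallback="n/a"):
    """Render a payload field, falling back to 'n/a' when absent."""
    return fallback if value is None else value

print("Release:", fmt(payload.get("release_date")))
print("Context Window:", fmt(payload.get("context_window")))
print("Modalities:", fmt(payload.get("modalities")))
```

Missing fields render as "n/a", which matches how the Context Window and Modalities rows above are displayed.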

Watch Area: HLE

Rank #444 across 474 models, scoring 3.6%.

Watch Area: Math

Rank #195 across 269 models, with a math index of 26.0.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
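A minimal sketch of how a rank can be turned into the percentile scores plotted here — the exact formula the site uses is not published on this page, so this is an assumption (linear rank-to-percentile, best rank mapping to 100). For cost metrics, the inversion noted above means cheaper models already receive the better rank, so the same formula applies:

```python
def percentile(rank, total):
    """Map a 1-based rank among `total` models to a 0-100 percentile.

    Assumed linear scheme: rank #1 -> 100 (strongest placement),
    rank #total -> 0. For cost metrics, lower prices are ranked
    higher upstream, so no extra inversion is needed here.
    """
    if total < 2:
        return 100.0
    return 100.0 * (total - rank) / (total - 1)

# HLE watch area: rank #444 of 474 models -> a low percentile
print(round(percentile(444, 474), 1))  # prints 6.3
```

Under this scheme the model's strong MATH-500 placement (#80) yields a much higher bar than its HLE placement (#444), consistent with the chart's reading that higher bars mean stronger relative placement.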

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 16.5 | #287 |
| Artificial Analysis Coding Index | coding | 16.4 | #227 |
| Artificial Analysis Math Index | math | 26.0 | #195 |
| MMLU-Pro | reasoning | 75.2% | #173 |
| GPQA | reasoning | 55.7% | #306 |
| Humanity's Last Exam | reasoning | 3.6% | #444 |
| LiveCodeBench | coding | 35.9% | #194 |
| SciCode | coding, reasoning | 35.4% | #184 |
| MATH-500 | math | 88.7% | #80 |
| AIME | math | 25.3% | #92 |
| Blended Price | cost | $0.625/M | #129 |
| Input Price | cost | $0.400/M | #152 |
| Output Price | cost | $0.890/M | #105 |
| Value Index | cost, overall | 26.4 | #166 |