DeepSeek

DeepSeek V4 Pro (Reasoning, Max Effort)

DeepSeek V4 Pro (Reasoning, Max Effort) is a DeepSeek model profile. The DeepSeek family is known for strong reasoning and coding performance and cost-conscious API pricing, as in its R1 and V-series releases. This page separates the public release context from measured benchmark performance (sourced from Artificial Analysis) so you can inspect the model's quality and efficiency directly.

DeepSeek release notes

Operational Metrics

Output Speed: 34.3 tok/s
Time to First Token: 1.24s
Blended Price: $2.18/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Apr 24, 2026
Context Window: n/a
Modalities: n/a
API fields: release_date
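The metadata above is extracted from an upstream model payload, with absent fields shown as "n/a". A minimal sketch of that extraction, assuming a hypothetical payload shape (the page only confirms that a `release_date` field is exposed):

```python
# Sketch of pulling queryable facts from an upstream model payload.
# The payload shape is hypothetical; only release_date is confirmed
# by the page ("API fields: release_date").

def extract_metadata(payload: dict) -> dict:
    """Pull known fields, falling back to 'n/a' when a field is absent."""
    fields = ("release_date", "context_window", "modalities")
    return {f: payload.get(f, "n/a") for f in fields}

payload = {"release_date": "2026-04-24"}  # hypothetical upstream response
print(extract_metadata(payload))
# → {'release_date': '2026-04-24', 'context_window': 'n/a', 'modalities': 'n/a'}
```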

Strength: HLE
35.9% (rank #11 across 474 models)

Strength: Overall
51.5 (rank #15 across 500 models)

Strength: GPQA
88.8% (rank #19 across 478 models)

Watch Area: Speed
34.3 tok/s (rank #269 across 293 models)

Watch Area: Input $
$1.74/M (rank #255 across 325 models)

Watch Area: Blended $
$2.18/M (rank #227 across 325 models)

Strength Profile

[Chart: percentile score by analysis domain. Cost is inverted: lower input, output, and blended prices rank higher.]

Benchmark Percentiles

[Chart: per-benchmark percentiles; a higher percentile means stronger relative placement.]

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 51.5 | #15 |
| Artificial Analysis Coding Index | coding | 47.5 | #17 |
| GPQA | reasoning | 88.8% | #19 |
| Humanity's Last Exam | reasoning | 35.9% | #11 |
| SciCode | coding, reasoning | 50.0% | #19 |
| Output Speed | speed | 34.3 tok/s | #269 |
| Time to First Token | speed | 1.24s | #177 |
| Blended Price | cost | $2.18/M | #227 |
| Input Price | cost | $1.74/M | #255 |
| Output Price | cost | $3.48/M | #208 |
| Value Index | cost, overall | 23.7 | #179 |
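The blended price combines the input and output prices above. A sketch assuming the 3:1 input:output token weighting commonly used for blended-price metrics (the page does not state the weighting), working in cents to avoid float-rounding surprises:

```python
# Reproducing the blended price from the input/output rows above,
# assuming a 3:1 input:output token weighting (not stated on the page).
# Prices are in cents per million tokens to keep the arithmetic exact.

input_cents, output_cents = 174, 348  # $1.74/M and $3.48/M
w_in, w_out = 3, 1                    # assumed blend weights

blended_cents = (input_cents * w_in + output_cents * w_out) / (w_in + w_out)
print(blended_cents / 100)  # 2.175, displayed on the page as $2.18/M
```

With these weights the computed value is $2.175/M, consistent with the displayed $2.18/M after rounding.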