Easy Benchmarks: LLM model index

DeepSeek

DeepSeek R1 Distill Llama 70B

This page profiles DeepSeek R1 Distill Llama 70B, a DeepSeek model from a family known for strong reasoning, coding, and cost-conscious API pricing, including the R1 and V-series releases. It separates the public release context from measured benchmark performance so you can inspect the model's quality and efficiency directly.

DeepSeek release notes

Operational Metrics

Output Speed: 44 tok/s
First Token: 0.46s
Blended Price: $0.875/M
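
The listed blended price is consistent with an equal-weighted average of the input ($0.700/M) and output ($1.05/M) per-million-token prices shown in the benchmark table. A minimal sketch, assuming that 1:1 weighting (the page does not state the actual blend ratio):

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: float = 0.5) -> float:
    """Blend per-million-token prices.

    The equal (0.5) weighting is an assumption that happens to reproduce
    this page's numbers; other sites weight input tokens more heavily.
    """
    return input_weight * input_price + (1 - input_weight) * output_price

print(blended_price(0.700, 1.05))  # 0.875, matching the listed blended price
```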

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Jan 20, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: TTFT (Time to First Token)

0.46s, ranked #33 of 293 models.

Strength: AIME

67.0%, ranked #46 of 194 models.

Watch Area: Speed

44 tok/s, ranked #243 of 293 models.

Watch Area: GPQA

40.2%, ranked #391 of 478 models.

Watch Area: LCB (LiveCodeBench)

26.6%, ranked #247 of 343 models.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
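
The inversion described above can be sketched as follows. The function name and the percentile convention are illustrative assumptions, not taken from the site; the point is only that price metrics flip the comparison so cheaper models place higher:

```python
def percentile_rank(values, value, lower_is_better=False):
    """Percentile placement of `value` among `values` (0-100, higher is better).

    With lower_is_better=True (used here for input, output, and blended
    prices), a model beats every model it undercuts on price, matching the
    inverted-cost note above.  Exact tie-handling is an assumption.
    """
    if len(values) < 2:
        return 100.0
    if lower_is_better:
        beaten = sum(v > value for v in values)   # models this one undercuts
    else:
        beaten = sum(v < value for v in values)   # models this one outscores
    return 100.0 * beaten / (len(values) - 1)

# Cheapest of four blended prices lands at the 100th percentile.
print(percentile_rank([0.7, 0.9, 1.2, 2.0], 0.7, lower_is_better=True))
```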

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 16.0 | #295 |
| Artificial Analysis Coding Index | coding | 11.4 | #293 |
| Artificial Analysis Math Index | math | 53.7 | #133 |
| MMLU-Pro | reasoning | 79.5% | #123 |
| GPQA | reasoning | 40.2% | #391 |
| Humanity's Last Exam | reasoning | 6.1% | #234 |
| LiveCodeBench | coding | 26.6% | #247 |
| SciCode | coding, reasoning | 31.2% | #231 |
| MATH-500 | math | 93.5% | #52 |
| AIME | math | 67.0% | #46 |
| Output Speed | speed | 44 tok/s | #243 |
| Time to First Token | speed | 0.46s | #33 |
| Blended Price | cost | $0.875/M | #166 |
| Input Price | cost | $0.700/M | #195 |
| Output Price | cost | $1.05/M | #113 |
| Value Index | cost, overall | 18.3 | #197 |
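
The percentile bars are not published numerically, but they can be approximated from the table's ranks. A plausible sketch, assuming a 1-based rank where rank #1 maps to the 100th percentile and the worst rank to the 0th (the site's exact formula is not stated):

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Convert a 1-based rank among `total` models to a 0-100 percentile.

    Linear mapping: rank 1 -> 100.0, rank `total` -> 0.0.  This is an
    assumed convention, not confirmed by the page.
    """
    return 100.0 * (total - rank) / (total - 1)

# Time to First Token: rank #33 of 293 lands around the 89th percentile.
print(round(rank_to_percentile(33, 293), 1))
```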