
DeepSeek

DeepSeek R1 Distill Qwen 14B

DeepSeek R1 Distill Qwen 14B is a model from DeepSeek, a family known for strong reasoning, coding, and cost-conscious API models such as the R1 and V-series releases. This page separates the public release context from measured benchmark performance so you can inspect the model's quality and efficiency directly.

DeepSeek release notes

Operational Metrics

Output Speed: n/a
First Token: n/a
Blended Price: -

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Jan 20, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
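As a sketch of how these facts can be queried, the snippet below parses a hypothetical upstream payload. Only the `release_date` field is confirmed by this page; the payload shape and the other field names are assumptions for illustration.

```python
import json

# Hypothetical payload shape: of these fields, only "release_date"
# is documented on this page; the rest are illustrative assumptions.
payload = json.loads("""
{
  "name": "DeepSeek R1 Distill Qwen 14B",
  "release_date": "2025-01-20",
  "context_window": null,
  "modalities": null
}
""")

def fmt(value, missing="n/a"):
    """Render a metadata field, falling back to 'n/a' for absent values."""
    return missing if value is None else value

print("Release:", fmt(payload.get("release_date")))        # Release: 2025-01-20
print("Context Window:", fmt(payload.get("context_window")))  # Context Window: n/a
print("Modalities:", fmt(payload.get("modalities")))          # Modalities: n/a
```

Fields missing from the payload render as "n/a", matching how the metadata block above displays them.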

Strength: MATH-500

Rank #42 across 201 models.

94.9%

Strength: AIME

Rank #47 across 194 models.

66.7%

Watch Area: HLE

Rank #357 across 474 models.

4.4%

Watch Area: GPQA

Rank #348 across 478 models.

48.4%

Watch Area: SciCode

Rank #329 across 472 models.

23.9%
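The ranks above can be converted to percentiles under the usual "share of models ranked below" convention. This is a sketch only; the site's exact percentile formula is not stated on this page.

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Percentage of other models this model outranks.
    Assumes rank 1 is best; the page's exact convention is not stated."""
    return round(100 * (total - rank) / (total - 1), 1)

# MATH-500: rank #42 across 201 models (a strength)
print(rank_to_percentile(42, 201))   # 79.5
# HLE: rank #357 across 474 models (a watch area)
print(rank_to_percentile(357, 474))  # 24.7
```

Under this convention, the MATH-500 rank places the model around the 80th percentile, while the HLE rank sits near the 25th, which matches the strength/watch-area split above.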

Strength Profile

[Bar chart: benchmark percentiles by analysis domain; higher bars mean stronger relative placement.]

All Benchmarks

| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 15.8 | #301 |
| Artificial Analysis Math Index | math | 55.7 | #129 |
| MMLU-Pro | reasoning | 74.0% | #191 |
| GPQA | reasoning | 48.4% | #348 |
| Humanity's Last Exam | reasoning | 4.4% | #357 |
| LiveCodeBench | coding | 37.6% | #192 |
| SciCode | coding, reasoning | 23.9% | #329 |
| MATH-500 | math | 94.9% | #42 |
| AIME | math | 66.7% | #47 |