
Mistral

Ministral 3 14B

Ministral 3 14B is a Mistral AI model profile from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The Artificial Analysis data below makes its multilingual, coding, cost, and latency tradeoffs easier to compare against models from other providers.

Related announcement: Introducing Mistral 3

Operational Metrics

Output Speed: 121.6 tok/s
First Token: 0.35s
Blended Price: $0.200/M
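The blended price folds input and output token prices into a single per-million figure. A minimal sketch, assuming a 3:1 input:output weighting (a common convention in such indexes; the exact weighting used upstream is an assumption here, not stated on this page):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-million-token prices.

    The 3:1 input:output weighting is an assumed methodology,
    not taken from this page.
    """
    total = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total

# Ministral 3 14B lists $0.200/M for both input and output,
# so any weighting yields the same blended figure.
print(blended_price(0.200, 0.200))  # 0.2
```

Because input and output prices coincide for this model, the blended figure is insensitive to the assumed ratio.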

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Dec 2, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: TTFT
Rank #12 across 293 models (0.35s).

Strength: Output $
Rank #23 across 325 models ($0.200/M).

Strength: Blended $
Rank #60 across 325 models ($0.200/M).

Watch Area: Coding
Rank #304 across 410 models (index 10.9).

Watch Area: HLE
Rank #345 across 474 models (4.6%).

Watch Area: SciCode
Rank #331 across 472 models (23.6%).

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
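The cost inversion above can be sketched as a percentile computation in which lower prices outrank a larger share of models. A minimal illustration with hypothetical prices; the page's exact percentile formula is an assumption:

```python
def percentile(value: float, population: list[float],
               lower_is_better: bool = False) -> float:
    """Percent of the population this value beats.

    lower_is_better=True implements the cost inversion:
    a cheaper price outranks a larger share of models.
    """
    if lower_is_better:
        beaten = sum(1 for v in population if value < v)
    else:
        beaten = sum(1 for v in population if value > v)
    return 100.0 * beaten / len(population)

# Hypothetical blended prices ($/M) for a five-model population.
prices = [0.20, 0.50, 1.00, 3.00, 15.00]
print(percentile(0.20, prices, lower_is_better=True))  # 80.0
```

Without the inversion flag, the cheapest model would land at the bottom of the cost domain instead of the top.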

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 16.0 | #298
Artificial Analysis Coding Index | coding | 10.9 | #304
Artificial Analysis Math Index | math | 30.0 | #185
MMLU-Pro | reasoning | 69.3% | #228
GPQA | reasoning | 57.2% | #296
Humanity's Last Exam | reasoning | 4.6% | #345
LiveCodeBench | coding | 35.1% | #198
SciCode | coding, reasoning | 23.6% | #331
Output Speed | speed | 121.6 tok/s | #95
Time to First Token | speed | 0.35s | #12
Blended Price | cost | $0.200/M | #60
Input Price | cost | $0.200/M | #89
Output Price | cost | $0.200/M | #23
Value Index | cost, overall | 80.0 | #70
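Ranks like those above translate into the relative placements shown in the percentile charts. A minimal sketch, assuming percentile means the share of models this one outranks (the page's exact formula is an assumption):

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Share of models this one outranks, as a percentage.

    Rank #1 of N maps to 100 * (N - 1) / N; last place maps to 0.
    """
    return 100.0 * (total - rank) / total

# Time to First Token: rank #12 across 293 models.
print(round(rank_to_percentile(12, 293), 1))  # 95.9
```

This makes the contrast on the page concrete: a #12 TTFT rank sits near the top of the speed domain, while a #304 coding rank out of 410 sits in the bottom third.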