
Mistral

Mistral Small 3.1

Mistral Small 3.1 is a Mistral AI model from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The local data in this profile makes its multilingual, coding, cost, and latency tradeoffs easier to compare against other providers' models.


Operational Metrics

Output Speed: 138.8 tok/s
First Token: 0.52s
Blended Price: $0.150/M
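
The blended price can be reproduced from the input and output prices listed further down under a 3:1 input:output token weighting. A minimal sketch, assuming that weighting (the upstream methodology is not stated on this page):

```python
def blended_price(input_per_m: float, output_per_m: float) -> float:
    """Blend per-million-token prices at an assumed 3:1 input:output ratio."""
    return (3 * input_per_m + output_per_m) / 4

# Input $0.100/M and output $0.300/M give the listed $0.150/M blended price.
print(round(blended_price(0.100, 0.300), 3))  # 0.15
```

With these prices the 3:1 blend lands exactly on the listed $0.150/M, which is why that weighting is assumed here.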

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Mar 17, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
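
A short sketch of how metadata fields like these can be rendered with an "n/a" fallback for absent values. Only release_date is confirmed by this page; the other payload field names are hypothetical:

```python
def fmt_field(payload: dict, key: str) -> str:
    """Render a metadata field, falling back to 'n/a' when absent."""
    value = payload.get(key)
    return str(value) if value is not None else "n/a"

# Hypothetical upstream payload: context window and modalities are missing.
payload = {"release_date": "2025-03-17"}
print(fmt_field(payload, "release_date"))    # 2025-03-17
print(fmt_field(payload, "context_window"))  # n/a
```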

Strength: Blended Price $0.150/M (rank #39 of 325 models)

Strength: Output Price $0.300/M (rank #43 of 325 models)

Strength: Input Price $0.100/M (rank #44 of 325 models)

Watch Area: Math 3.7 (rank #252 of 269 models)

Watch Area: LiveCodeBench 21.2% (rank #266 of 343 models)

Watch Area: GPQA 45.4% (rank #363 of 478 models)

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
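
One plausible way to turn the ranks above into percentile bars; the site's exact formula is an assumption. Since a lower price already earns a better (smaller) rank, cost metrics need no extra inversion in this mapping:

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Share of models ranked at or below this one (higher is better)."""
    return 100.0 * (total - rank) / total

# Blended price: rank #39 of 325 models -> roughly the 88th percentile.
print(round(rank_to_percentile(39, 325), 1))  # 88.0
```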

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric | Domain | Value | Rank
Artificial Analysis Intelligence Index | overall | 14.5 | #332
Artificial Analysis Coding Index | coding | 13.9 | #261
Artificial Analysis Math Index | math | 3.7 | #252
MMLU-Pro | reasoning | 65.9% | #247
GPQA | reasoning | 45.4% | #363
Humanity's Last Exam | reasoning | 4.8% | #321
LiveCodeBench | coding | 21.2% | #266
SciCode | coding, reasoning | 26.5% | #300
MATH-500 | math | 70.7% | #140
AIME | math | 9.3% | #139
Output Speed | speed | 138.8 tok/s | #70
Time to First Token | speed | 0.52s | #51
Blended Price | cost | $0.150/M | #39
Input Price | cost | $0.100/M | #44
Output Price | cost | $0.300/M | #43
Value Index | cost, overall | 96.7 | #50