
Mistral

Mistral Medium 3.1

Mistral Medium 3.1 is a Mistral AI model from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The local data makes its multilingual, coding, cost, and latency tradeoffs easier to compare against other providers.

Operational Metrics

Output Speed: 56.3 tok/s
First Token: 0.59s
Blended Price: $0.800/M tokens
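The blended price is a token-weighted average of the input and output prices listed further down ($0.400/M and $2.00/M). A minimal sketch, assuming the common 3:1 input:output token ratio (the ratio is an assumption, not stated on this page, but it reproduces the listed figure):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_tokens: int = 3, output_tokens: int = 1) -> float:
    """Token-weighted average of per-million-token prices (USD)."""
    total = input_tokens + output_tokens
    return (input_per_m * input_tokens + output_per_m * output_tokens) / total

# Mistral Medium 3.1: $0.400/M input, $2.00/M output.
print(round(blended_price(0.400, 2.00), 3))  # 0.8
```

With a 3:1 weighting, (3 × $0.400 + 1 × $2.00) / 4 = $0.800 per million tokens, matching the blended price above.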

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Aug 12, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: TTFT (0.59s), rank #66 of 293 models.

Watch Area: HLE (4.4%), rank #363 of 474 models.

Watch Area: Speed (56.3 tok/s), rank #205 of 293 models.

Watch Area: MMLU-Pro (68.3%), rank #235 of 345 models.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
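The page does not publish its exact percentile formula, so the following is a plausible sketch of how ranks map to percentiles, including the cost inversion noted above (for cost domains, models are ranked by ascending price so that cheaper models place higher):

```python
def rank_percentile(rank: int, total: int) -> float:
    """Percentile placement: rank #1 of N maps to 100, rank #N to 0."""
    return 100.0 * (total - rank) / (total - 1)

def price_rank(all_prices: list[float], price: float) -> int:
    """Cost metrics are 'inverted': rank by ascending price,
    so a lower price yields a better (lower) rank."""
    return 1 + sum(p < price for p in all_prices)

# TTFT above is rank #66 of 293 models:
print(round(rank_percentile(66, 293), 1))  # 77.7
```

Under this sketch the TTFT strength lands around the 78th percentile, while the HLE rank (#363 of 474) falls near the 23rd, which matches the strength/watch-area split shown above.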

Benchmark Percentiles

Higher percentiles mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 21.3 | #227 |
| Artificial Analysis Coding Index | coding | 18.3 | #208 |
| Artificial Analysis Math Index | math | 38.3 | #163 |
| MMLU-Pro | reasoning | 68.3% | #235 |
| GPQA | reasoning | 58.8% | #285 |
| Humanity's Last Exam | reasoning | 4.4% | #363 |
| LiveCodeBench | coding | 40.6% | #176 |
| SciCode | coding, reasoning | 33.8% | #205 |
| Output Speed | speed | 56.3 tok/s | #205 |
| Time to First Token | speed | 0.59s | #66 |
| Blended Price | cost | $0.800/M | #150 |
| Input Price | cost | $0.400/M | #156 |
| Output Price | cost | $2.00/M | #153 |
| Value Index | cost, overall | 26.6 | #164 |