Mistral

Magistral Medium 1.2

This page profiles Magistral Medium 1.2, a Mistral AI model from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The local data makes its multilingual, coding, cost, and latency tradeoffs easier to compare against other providers.


Operational Metrics

* Output Speed: 42 tok/s
* First Token: 0.48s
* Blended Price: $2.75/M
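The blended price above can be reproduced from the per-token input and output prices listed further down ($2.00/M in, $5.00/M out). A minimal sketch, assuming a 3:1 input-to-output token weighting (a common convention for blended pricing; the upstream weighting is not stated in this payload):

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average price per million tokens.

    Assumes a 3:1 input:output token ratio; the actual upstream
    weighting may differ.
    """
    total_weight = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total_weight

# Magistral Medium 1.2: $2.00/M input, $5.00/M output
print(blended_price(2.00, 5.00))  # 2.75, matching the listed $2.75/M
```

With these weights the arithmetic is (3 × 2.00 + 1 × 5.00) / 4 = 2.75, which agrees with the figure shown here.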

Model Metadata

Queryable facts extracted from the upstream model payload.

* Release: Sep 18, 2025
* Context Window: n/a
* Modalities: n/a
* API fields: release_date

Strengths and Watch Areas

* Strength: LiveCodeBench (LCB): rank #47 of 343 models (75.0%)
* Strength: Time to First Token (TTFT): rank #41 of 293 models (0.48s)
* Strength: Math: rank #60 of 269 models (82.0)
* Watch Area: Speed: rank #248 of 293 models (42 tok/s)
* Watch Area: Input Price: rank #272 of 325 models ($2.00/M)
* Watch Area: Value: rank #248 of 323 models (9.9)

Strength Profile

[Chart: percentile score by analysis domain. Cost is inverted: lower input, output, and blended prices rank higher.]

Benchmark Percentiles

[Chart: percentile placement per benchmark; higher bars mean stronger relative placement.]
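The cost inversion noted above (lower prices placing higher) amounts to flipping the comparison direction when computing a percentile. A minimal sketch; the function and its signature are illustrative, not the site's actual code:

```python
def percentile_rank(value: float, population: list[float],
                    lower_is_better: bool = False) -> float:
    """Percentage of the population this value outperforms (0-100 scale)."""
    if lower_is_better:
        # Inverted metric (e.g. price, latency): beating means being cheaper/faster.
        beaten = sum(1 for v in population if v > value)
    else:
        # Normal metric (e.g. accuracy): beating means scoring higher.
        beaten = sum(1 for v in population if v < value)
    return 100.0 * beaten / len(population)

# Hypothetical input prices ($/M) across five models
prices = [0.50, 1.00, 2.00, 4.00, 8.00]
print(percentile_rank(2.00, prices, lower_is_better=True))  # 40.0: beats the two pricier models
```

The same value lands at different percentiles depending on direction, which is why price metrics are inverted before charting.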

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 27.1 | #169 |
| Artificial Analysis Coding Index | coding | 21.7 | #183 |
| Artificial Analysis Math Index | math | 82.0 | #60 |
| MMLU-Pro | reasoning | 81.5% | #86 |
| GPQA | reasoning | 73.9% | #165 |
| Humanity's Last Exam | reasoning | 9.6% | #160 |
| LiveCodeBench | coding | 75.0% | #47 |
| SciCode | coding, reasoning | 39.2% | #113 |
| Output Speed | speed | 42 tok/s | #248 |
| Time to First Token | speed | 0.48s | #41 |
| Blended Price | cost | $2.75/M | #239 |
| Input Price | cost | $2.00/M | #272 |
| Output Price | cost | $5.00/M | #229 |
| Value Index | cost, overall | 9.9 | #248 |