Easy Benchmarks: LLM model index

Data source: Artificial Analysis

Mistral

Magistral Small 1.2

Magistral Small 1.2 is a Mistral AI model from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The data here makes its multilingual, coding, cost, and latency tradeoffs easier to compare against other providers.


Operational Metrics

Output Speed: 100.3 tok/s
First Token: 0.35s
Blended Price: $0.750/M
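The blended figure can be reproduced from the per-direction prices in the table below. A minimal sketch, assuming the common 3:1 input:output token weighting (the page's exact weighting is not stated, so the ratio is an assumption):

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted average price per million tokens.

    Assumes a 3:1 input:output token mix, a common convention for
    blended pricing; adjust the weights if the upstream source uses
    a different ratio.
    """
    total = input_weight + output_weight
    return (input_price * input_weight + output_price * output_weight) / total

# $0.500/M input and $1.50/M output reproduce the $0.750/M shown above.
print(blended_price(0.500, 1.50))  # → 0.75
```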

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Sep 17, 2025
Context Window: n/a
Modalities: n/a
API fields: release_date
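Extracting these facts from the upstream payload amounts to a lookup with an "n/a" fallback for absent fields. A hypothetical sketch; only `release_date` is a confirmed field name, and the payload shape is assumed:

```python
# Assumed payload shape: a flat dict of model facts. Fields other
# than release_date are illustrative, not confirmed upstream names.
payload = {
    "name": "Magistral Small 1.2",
    "release_date": "2025-09-17",
    # context window and modalities are absent upstream, hence "n/a"
}

def fact(payload: dict, key: str) -> str:
    """Return the payload value for key, or 'n/a' when it is missing."""
    value = payload.get(key)
    return str(value) if value is not None else "n/a"

print(fact(payload, "release_date"))    # → 2025-09-17
print(fact(payload, "context_window"))  # → n/a
```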

Strength: TTFT (Time to First Token)

0.35s, rank #13 across 293 models.

Strength: LCB (LiveCodeBench)

72.3%, rank #60 across 343 models.

Strength: Math

80.3, rank #65 across 269 models.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
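The inversion above can be sketched as a percentile function with a lower-is-better flag for cost metrics. A minimal illustration under assumed data (the population of prices below is hypothetical):

```python
def percentile(value: float, population: list[float],
               lower_is_better: bool = False) -> float:
    """Percentile placement of value within population (0-100).

    For cost metrics, lower_is_better=True inverts the comparison
    so cheaper models place higher, matching the note above.
    """
    if lower_is_better:
        beaten = sum(1 for v in population if v > value)
    else:
        beaten = sum(1 for v in population if v < value)
    return 100.0 * beaten / len(population)

# Hypothetical blended prices ($/M) for five models:
prices = [0.25, 0.75, 1.50, 3.00, 10.0]
print(percentile(0.75, prices, lower_is_better=True))  # → 60.0
```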

Benchmark Percentiles

Higher percentiles mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 18.2 | #265 |
| Artificial Analysis Coding Index | coding | 14.8 | #242 |
| Artificial Analysis Math Index | math | 80.3 | #65 |
| MMLU-Pro | reasoning | 76.8% | #157 |
| GPQA | reasoning | 66.3% | #236 |
| Humanity's Last Exam | reasoning | 6.1% | #237 |
| LiveCodeBench | coding | 72.3% | #60 |
| SciCode | coding, reasoning | 35.2% | #189 |
| Output Speed | speed | 100.3 tok/s | #118 |
| Time to First Token | speed | 0.35s | #13 |
| Blended Price | cost | $0.750/M | #143 |
| Input Price | cost | $0.500/M | #166 |
| Output Price | cost | $1.50/M | #139 |
| Value Index | cost, overall | 24.3 | #176 |