
Mistral

Mistral Small 4 (Reasoning)

This page profiles Mistral Small 4 (Reasoning), a Mistral AI model from a lineup that spans the compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The local data makes its multilingual, coding, cost, and latency tradeoffs easier to compare against other providers.

Operational Metrics

Output Speed: 149.5 tok/s
First Token: 0.64s
Blended Price: $0.263/M

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Mar 16, 2026
Context Window: n/a
Modalities: n/a
API fields: release_date
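Reading these facts out of the upstream payload can be sketched as below. Only `release_date` is confirmed as an API field by the note above; the payload shape and the other key names are illustrative assumptions, with missing fields falling back to "n/a" as shown in the metadata list:

```python
import json

# Hypothetical payload shape: only `release_date` is a confirmed field
# per the "API fields" note; the rest is illustrative.
raw = '{"name": "Mistral Small 4 (Reasoning)", "release_date": "2026-03-16"}'
payload = json.loads(raw)

release = payload.get("release_date")            # "2026-03-16"
context = payload.get("context_window", "n/a")   # absent upstream -> "n/a"
modalities = payload.get("modalities", "n/a")    # absent upstream -> "n/a"
print(release, context, modalities)
```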

Strength: Value

Rank #40 across 323 models.

Value Index: 105.7

Strength: Speed

Rank #59 across 293 models.

149.5 tok/s

Strength: Input $

Rank #71 across 325 models.

$0.150/M

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
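The inversion described above can be sketched as a percentile computation where, for cost metrics, a *lower* value counts as a *better* placement. This is an illustrative implementation, not the site's actual scoring code:

```python
def percentile(value: float, population: list[float], invert: bool = False) -> float:
    """Percent of the population this value beats (0-100).

    With invert=True (cost metrics), a lower value ranks higher:
    we count how many models are *more expensive* than `value`.
    """
    if invert:
        better_than = sum(1 for v in population if v > value)
    else:
        better_than = sum(1 for v in population if v < value)
    return 100.0 * better_than / len(population)

# Cost example: $0.150/M input price vs. a toy population of four prices.
print(percentile(0.150, [0.10, 0.20, 0.50, 1.00], invert=True))  # 75.0
```

In the example, three of the four prices are higher than $0.150/M, so the cheaper model lands at the 75th percentile.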

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
|---|---|---|---|
| Artificial Analysis Intelligence Index | overall | 27.8 | #161 |
| Artificial Analysis Coding Index | coding | 24.3 | #157 |
| GPQA | reasoning | 76.9% | #134 |
| Humanity's Last Exam | reasoning | 9.5% | #164 |
| SciCode | coding, reasoning | 38.0% | #134 |
| Output Speed | speed | 149.5 tok/s | #59 |
| Time to First Token | speed | 0.64s | #76 |
| Blended Price | cost | $0.263/M | #73 |
| Input Price | cost | $0.150/M | #71 |
| Output Price | cost | $0.600/M | #86 |
| Value Index | cost, overall | 105.7 | #40 |