
Mistral

Mistral Small (Sep '24)

Mistral Small (Sep '24) is a Mistral AI model from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The data on this page makes its multilingual, coding, cost, and latency tradeoffs easier to compare against those of other providers.


Operational Metrics

Output Speed: 135 tok/s
First Token: 0.59s
Blended Price: $0.300/M
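
The blended price above is consistent with a token-weighted average of the input and output prices listed further down ($0.200/M in, $0.600/M out). A minimal sketch, assuming a 3:1 input-to-output token weighting (a common convention, not confirmed by this page):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Token-weighted average of input and output prices, in USD per 1M tokens.

    The 3:1 default weighting is an assumption for illustration.
    """
    total = input_weight + output_weight
    return (input_per_m * input_weight + output_per_m * output_weight) / total

# Mistral Small (Sep '24): $0.200/M input, $0.600/M output
print(round(blended_price(0.200, 0.600), 3))  # 0.3
```

Under that weighting the listed prices reproduce the $0.300/M blended figure exactly.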

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Sep 17, 2024
Context Window: n/a
Modalities: n/a
API fields: release_date
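
"Queryable facts extracted from the upstream model payload" can be sketched as a lookup over a fixed field list, with absent fields surfaced as "n/a" as in the card above. The payload shape and field names here are assumptions for illustration, not the actual upstream schema:

```python
import json

# Hypothetical payload; field names are assumptions, not the real schema.
payload = json.loads('{"name": "Mistral Small (Sep \'24)", "release_date": "2024-09-17"}')

QUERYABLE_FIELDS = ("release_date", "context_window", "modalities")

def extract_metadata(model: dict) -> dict:
    # Missing fields surface as "n/a" rather than being dropped,
    # so every card row is always present.
    return {field: model.get(field, "n/a") for field in QUERYABLE_FIELDS}

print(extract_metadata(payload))
# {'release_date': '2024-09-17', 'context_window': 'n/a', 'modalities': 'n/a'}
```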

Strength: TTFT (time to first token): 0.59s, rank #64 across 293 models.

Strength: Blended price: $0.300/M, rank #81 across 325 models.

Watch Area: LCB (LiveCodeBench): 14.1%, rank #294 across 343 models.

Watch Area: SciCode: 15.6%, rank #395 across 472 models.

Watch Area: GPQA: 38.1%, rank #400 across 478 models.

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
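
The inversion noted above can be sketched as a percentile computation where price-like metrics count the models you undercut rather than the models you outscore. The population values are made up for illustration:

```python
def percentile(value: float, population: list[float], invert: bool = False) -> float:
    """Share of the population this value beats, 0-100.

    invert=True treats lower as better (price-like metrics).
    """
    if invert:
        beaten = sum(1 for v in population if v > value)
    else:
        beaten = sum(1 for v in population if v < value)
    return 100.0 * beaten / len(population)

prices = [0.10, 0.30, 0.50, 0.90]  # made-up blended prices, $/M
print(percentile(0.30, prices, invert=True))  # 50.0 -- beats the two pricier models
```

Without the inversion, a cheap model would land near the bottom of the cost bars instead of the top.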

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

Metric                                 | Domain            | Value     | Rank
Artificial Analysis Intelligence Index | overall           | 10.2      | #416
MMLU-Pro                               | reasoning         | 52.9%     | #288
GPQA                                   | reasoning         | 38.1%     | #400
Humanity's Last Exam                   | reasoning         | 4.3%      | #377
LiveCodeBench                          | coding            | 14.1%     | #294
SciCode                                | coding, reasoning | 15.6%     | #395
MATH-500                               | math              | 56.3%     | #163
AIME                                   | math              | 6.3%      | #151
Output Speed                           | speed             | 135 tok/s | #75
Time to First Token                    | speed             | 0.59s     | #64
Blended Price                          | cost              | $0.300/M  | #81
Input Price                            | cost              | $0.200/M  | #90
Output Price                           | cost              | $0.600/M  | #84
Value Index                            | cost, overall     | 34.0      | #133