Easy Benchmarks: LLM model index
Data source: Artificial Analysis

Mistral

Mistral Small (Feb '24)

Mistral Small (Feb '24) is a Mistral AI model profile from a lineup that spans compact Ministral models, specialist coding models, reasoning-oriented Magistral releases, and larger frontier open models. The data on this page makes its multilingual, coding, cost, and latency tradeoffs easier to compare against models from other providers.


Operational Metrics

Output Speed: 130.6 tok/s
First Token: 0.61s
Blended Price: $1.50/M
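The blended price is a weighted average of the per-token input and output prices shown in the table further down ($1.00/M input, $3.00/M output). A minimal sketch, assuming the common 3:1 input:output token mix (the ratio Artificial Analysis uses for its blended figures), reproduces the $1.50/M value:

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted-average price per 1M tokens.

    Assumes a 3:1 input:output token mix; the exact ratio is an
    assumption about the upstream methodology, not stated on this page.
    """
    total = input_weight + output_weight
    return (input_price * input_weight + output_price * output_weight) / total

# Mistral Small (Feb '24): $1.00/M input, $3.00/M output
print(blended_price(1.00, 3.00))  # → 1.5
```

A 1:1 mix would instead give $2.00/M, so the weighting choice matters when comparing providers.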

Model Metadata

Queryable facts extracted from the upstream model payload.

Release: Feb 26, 2024
Context Window: n/a
Modalities: n/a
API fields: release_date

Strength: TTFT
Rank #71 across 293 models (0.61s).

Watch Area: GPQA
Rank #441 across 478 models (30.2%).

Watch Area: AIME
Rank #179 across 194 models (0.7%).

Watch Area: MMLU-Pro
Rank #314 across 345 models (41.9%).
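The strength and watch-area cards report absolute ranks ("Rank #71 across 293 models"), while the charts below report percentiles. A minimal sketch of the conversion, using one common convention (share of the field ranked below this model; the site does not publish its exact formula):

```python
def rank_to_percentile(rank: int, total: int) -> float:
    """Convert 'Rank #rank across total models' to a 0-100 percentile.

    Convention assumed here: percentile = share of OTHER models ranked
    worse, so rank 1 maps to 100.0 and the last rank maps to 0.0.
    """
    if total < 2:
        return 100.0
    return 100.0 * (total - rank) / (total - 1)

# TTFT strength card: rank 71 of 293
print(round(rank_to_percentile(71, 293), 1))  # → 76.0
```

Under this convention the TTFT strength sits around the 76th percentile, while the AIME watch area (rank 179 of 194) falls below the 8th.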

Strength Profile

Percentile score by analysis domain.

* Cost is inverted: lower input, output, and blended prices rank higher.
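The cost-inversion footnote can be sketched as: sort prices ascending before computing placement, so cheaper models land at higher percentiles. This is an illustrative implementation under that assumption, not the site's published method; the example price list is hypothetical:

```python
def inverted_cost_percentile(price: float, all_prices: list[float]) -> float:
    """Percentile on the cost axis where LOWER price ranks HIGHER.

    Counts the share of models whose price is at or above this one,
    so the cheapest model scores 100.0. Sketch only; the site's exact
    tie-breaking and formula are not published.
    """
    at_or_above = sum(p >= price for p in all_prices)
    return 100.0 * at_or_above / len(all_prices)

# Hypothetical blended prices ($/M) for four models
prices = [0.5, 1.5, 3.0, 10.0]
print(inverted_cost_percentile(1.5, prices))  # → 75.0
```

Speed and quality axes use the ordinary direction (higher raw value ranks higher); only the three price metrics are flipped this way.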

Benchmark Percentiles

Higher bars mean stronger relative placement.

All Benchmarks

| Metric | Domain | Value | Rank |
| --- | --- | --- | --- |
| Artificial Analysis Intelligence Index | overall | 9.0 | #442 |
| MMLU-Pro | reasoning | 41.9% | #314 |
| GPQA | reasoning | 30.2% | #441 |
| Humanity's Last Exam | reasoning | 4.4% | #364 |
| LiveCodeBench | coding | 11.1% | #312 |
| SciCode | coding, reasoning | 13.4% | #403 |
| MATH-500 | math | 56.2% | #164 |
| AIME | math | 0.7% | #179 |
| Output Speed | speed | 130.6 tok/s | #81 |
| Time to First Token | speed | 0.61s | #71 |
| Blended Price | cost | $1.50/M | #208 |
| Input Price | cost | $1.00/M | #220 |
| Output Price | cost | $3.00/M | #200 |
| Value Index | cost, overall | 6.0 | #277 |